modelId | label | readme | readme_len |
|---|---|---|---|
boychaboy/MNLI_bert-large-uncased | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
boychaboy/SNLI_bert-base-cased | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
codesj/empathic-concern | [
"LABEL_0"
] | Entry not found | 15 |
digit82/dialog-sbert-base | null | Entry not found | 15 |
gurkan08/bert-turkish-text-classification | [
"ekonomi",
"spor",
"saglik",
"kultur_sanat",
"bilim_teknoloji",
"egitim"
] | ---
language: tr
---
# Turkish News Text Classification
Turkish text classification model obtained by fine-tuning the Turkish BERT model (dbmdz/bert-base-turkish-cased).
# Dataset
The dataset consists of 11 classes obtained from https://www.trthaber.com/. The model was trained on the 6 most distinctive classes.
Dataset can be accessed at https://github.com/gurkan08/datasets/tree/master/trt_11_category.
```python
label_dict = {
    'LABEL_0': 'ekonomi',
    'LABEL_1': 'spor',
    'LABEL_2': 'saglik',
    'LABEL_3': 'kultur_sanat',
    'LABEL_4': 'bilim_teknoloji',
    'LABEL_5': 'egitim'
}
```
70% of the data was used for training and 30% for testing.
- train f1-weighted score = 97%
- test f1-weighted score = 94%
# Usage
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gurkan08/bert-turkish-text-classification")
model = AutoModelForSequenceClassification.from_pretrained("gurkan08/bert-turkish-text-classification")

nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

text = ["Süper Lig'in 6. haftasında Sivasspor ile Çaykur Rizespor karşı karşıya geldi...",
        "Son 24 saatte 69 kişi Kovid-19 nedeniyle yaşamını yitirdi, 1573 kişi iyileşti"]

out = nlp(text)

label_dict = {
    'LABEL_0': 'ekonomi',
    'LABEL_1': 'spor',
    'LABEL_2': 'saglik',
    'LABEL_3': 'kultur_sanat',
    'LABEL_4': 'bilim_teknoloji',
    'LABEL_5': 'egitim'
}

results = []
for result in out:
    result['label'] = label_dict[result['label']]
    results.append(result)
print(results)
# > [{'label': 'spor', 'score': 0.9992026090621948}, {'label': 'saglik', 'score': 0.9972177147865295}]
```
| 1,796 |
mrm8488/camembert-base-finetuned-pawsx-fr | null | ---
language: fr
datasets:
- xtreme
tags:
- nli
widget:
- text: "La première série a été mieux reçue par la critique que la seconde. La seconde série a été bien accueillie par la critique, mieux que la première."
---
# Camembert-base fine-tuned on PAWS-X-fr for Paraphrase Identification (NLI)
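The card provides only the widget example above. A minimal usage sketch follows; it is an assumption rather than part of the card, and both the input format (the two sentences joined in one string, as in the widget) and the returned label names should be verified against the model's config:
```python
# Hypothetical usage sketch — not from the model card. The output label
# names are whatever the checkpoint's config defines (e.g. LABEL_0/LABEL_1).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mrm8488/camembert-base-finetuned-pawsx-fr",
)

# Both sentences in one string, following the widget above.
text = (
    "La première série a été mieux reçue par la critique que la seconde. "
    "La seconde série a été bien accueillie par la critique, mieux que la première."
)
print(classifier(text))
```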
| 295 |
pmthangk09/bert-base-uncased-glue-cola | null | Entry not found | 15 |
razent/SciFive-large-Pubmed | null | ---
language:
- en
tags:
- token-classification
- text-classification
- question-answering
- text2text-generation
- text-generation
datasets:
- pubmed
---
# SciFive Pubmed Large
## Introduction
Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)
Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_
## How to use
For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-Pubmed")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-Pubmed").to("cuda")  # move the model to the GPU, since the inputs below are placed on "cuda"
sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ."
text = "ncbi_ner: " + sentence + " </s>"
encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=256,
    early_stopping=True
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
``` | 1,379 |
tals/albert-xlarge-vitaminc-fever | [
"NOT ENOUGH INFO",
"REFUTES",
"SUPPORTS"
] | ---
language: en
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
| 2,357 |
Farshid/distilbert-base-uncased-finetuned-financial_phrasebank | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-financial_phrasebank
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_75agree
metrics:
- name: Accuracy
type: accuracy
value: 0.944015444015444
- name: F1
type: f1
value: 0.9437595528186435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-financial_phrasebank
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5533
- Accuracy: 0.9440
- F1: 0.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0001 | 1.0 | 19 | 0.5826 | 0.9324 | 0.9334 |
| 0.011 | 2.0 | 38 | 0.5072 | 0.9382 | 0.9380 |
| 0.0007 | 3.0 | 57 | 0.5496 | 0.9382 | 0.9383 |
| 0.0004 | 4.0 | 76 | 0.5190 | 0.9421 | 0.9420 |
| 0.0 | 5.0 | 95 | 0.5611 | 0.9382 | 0.9388 |
| 0.0027 | 6.0 | 114 | 0.5734 | 0.9421 | 0.9414 |
| 0.0001 | 7.0 | 133 | 0.5333 | 0.9421 | 0.9424 |
| 0.0051 | 8.0 | 152 | 0.5648 | 0.9382 | 0.9390 |
| 0.0002 | 9.0 | 171 | 0.4934 | 0.9382 | 0.9385 |
| 0.005 | 10.0 | 190 | 0.5202 | 0.9344 | 0.9342 |
| 0.0146 | 11.0 | 209 | 0.4558 | 0.9479 | 0.9480 |
| 0.0002 | 12.0 | 228 | 0.4870 | 0.9421 | 0.9424 |
| 0.0049 | 13.0 | 247 | 0.4936 | 0.9440 | 0.9445 |
| 0.0007 | 14.0 | 266 | 0.5596 | 0.9363 | 0.9371 |
| 0.0009 | 15.0 | 285 | 0.4776 | 0.9479 | 0.9474 |
| 0.0 | 16.0 | 304 | 0.4737 | 0.9440 | 0.9438 |
| 0.0 | 17.0 | 323 | 0.4762 | 0.9479 | 0.9478 |
| 0.0 | 18.0 | 342 | 0.4826 | 0.9479 | 0.9478 |
| 0.0002 | 19.0 | 361 | 0.5324 | 0.9402 | 0.9395 |
| 0.0 | 20.0 | 380 | 0.5188 | 0.9498 | 0.9498 |
| 0.0 | 21.0 | 399 | 0.5327 | 0.9459 | 0.9461 |
| 0.0 | 22.0 | 418 | 0.5355 | 0.9459 | 0.9461 |
| 0.0 | 23.0 | 437 | 0.5369 | 0.9459 | 0.9461 |
| 0.0 | 24.0 | 456 | 0.5464 | 0.9440 | 0.9442 |
| 0.0 | 25.0 | 475 | 0.5468 | 0.9440 | 0.9442 |
| 0.0 | 26.0 | 494 | 0.5466 | 0.9440 | 0.9442 |
| 0.0 | 27.0 | 513 | 0.5471 | 0.9440 | 0.9442 |
| 0.0 | 28.0 | 532 | 0.5472 | 0.9440 | 0.9442 |
| 0.0 | 29.0 | 551 | 0.5481 | 0.9440 | 0.9442 |
| 0.0 | 30.0 | 570 | 0.5434 | 0.9459 | 0.9461 |
| 0.0 | 31.0 | 589 | 0.5433 | 0.9479 | 0.9479 |
| 0.0 | 32.0 | 608 | 0.5442 | 0.9479 | 0.9479 |
| 0.0 | 33.0 | 627 | 0.5456 | 0.9479 | 0.9479 |
| 0.0 | 34.0 | 646 | 0.5467 | 0.9479 | 0.9479 |
| 0.0 | 35.0 | 665 | 0.5482 | 0.9459 | 0.9461 |
| 0.0 | 36.0 | 684 | 0.5493 | 0.9459 | 0.9461 |
| 0.0 | 37.0 | 703 | 0.5497 | 0.9479 | 0.9479 |
| 0.0 | 38.0 | 722 | 0.5500 | 0.9479 | 0.9479 |
| 0.0 | 39.0 | 741 | 0.5517 | 0.9459 | 0.9461 |
| 0.0 | 40.0 | 760 | 0.5526 | 0.9459 | 0.9461 |
| 0.0 | 41.0 | 779 | 0.5517 | 0.9479 | 0.9479 |
| 0.0 | 42.0 | 798 | 0.5533 | 0.9479 | 0.9479 |
| 0.0 | 43.0 | 817 | 0.5555 | 0.9459 | 0.9461 |
| 0.0 | 44.0 | 836 | 0.5565 | 0.9459 | 0.9461 |
| 0.0 | 45.0 | 855 | 0.5571 | 0.9459 | 0.9461 |
| 0.0 | 46.0 | 874 | 0.5575 | 0.9459 | 0.9461 |
| 0.0 | 47.0 | 893 | 0.5593 | 0.9459 | 0.9461 |
| 0.0 | 48.0 | 912 | 0.5604 | 0.9459 | 0.9461 |
| 0.0 | 49.0 | 931 | 0.5611 | 0.9459 | 0.9461 |
| 0.0 | 50.0 | 950 | 0.5615 | 0.9459 | 0.9461 |
| 0.0 | 51.0 | 969 | 0.5621 | 0.9459 | 0.9461 |
| 0.0 | 52.0 | 988 | 0.5622 | 0.9459 | 0.9461 |
| 0.0 | 53.0 | 1007 | 0.5628 | 0.9459 | 0.9461 |
| 0.0 | 54.0 | 1026 | 0.5629 | 0.9479 | 0.9479 |
| 0.0 | 55.0 | 1045 | 0.5639 | 0.9479 | 0.9479 |
| 0.0 | 56.0 | 1064 | 0.5652 | 0.9459 | 0.9461 |
| 0.0 | 57.0 | 1083 | 0.5658 | 0.9459 | 0.9461 |
| 0.0 | 58.0 | 1102 | 0.5664 | 0.9459 | 0.9461 |
| 0.0 | 59.0 | 1121 | 0.5472 | 0.9498 | 0.9498 |
| 0.0 | 60.0 | 1140 | 0.5428 | 0.9517 | 0.9517 |
| 0.0 | 61.0 | 1159 | 0.5433 | 0.9517 | 0.9517 |
| 0.0 | 62.0 | 1178 | 0.5452 | 0.9517 | 0.9517 |
| 0.0 | 63.0 | 1197 | 0.5473 | 0.9517 | 0.9517 |
| 0.0 | 64.0 | 1216 | 0.5481 | 0.9517 | 0.9517 |
| 0.0 | 65.0 | 1235 | 0.5488 | 0.9517 | 0.9517 |
| 0.0 | 66.0 | 1254 | 0.5494 | 0.9517 | 0.9517 |
| 0.0 | 67.0 | 1273 | 0.5499 | 0.9517 | 0.9517 |
| 0.0 | 68.0 | 1292 | 0.5504 | 0.9517 | 0.9517 |
| 0.0 | 69.0 | 1311 | 0.5509 | 0.9517 | 0.9517 |
| 0.0 | 70.0 | 1330 | 0.5514 | 0.9517 | 0.9517 |
| 0.0 | 71.0 | 1349 | 0.5519 | 0.9517 | 0.9517 |
| 0.0 | 72.0 | 1368 | 0.5535 | 0.9517 | 0.9517 |
| 0.0 | 73.0 | 1387 | 0.5546 | 0.9517 | 0.9517 |
| 0.0 | 74.0 | 1406 | 0.5551 | 0.9517 | 0.9517 |
| 0.0 | 75.0 | 1425 | 0.5555 | 0.9517 | 0.9517 |
| 0.0 | 76.0 | 1444 | 0.5551 | 0.9517 | 0.9517 |
| 0.0 | 77.0 | 1463 | 0.5549 | 0.9517 | 0.9517 |
| 0.0 | 78.0 | 1482 | 0.5551 | 0.9517 | 0.9517 |
| 0.0 | 79.0 | 1501 | 0.5617 | 0.9479 | 0.9479 |
| 0.0 | 80.0 | 1520 | 0.5647 | 0.9459 | 0.9459 |
| 0.0026 | 81.0 | 1539 | 0.5970 | 0.9402 | 0.9404 |
| 0.0005 | 82.0 | 1558 | 0.5256 | 0.9459 | 0.9455 |
| 0.0005 | 83.0 | 1577 | 0.5474 | 0.9479 | 0.9478 |
| 0.0006 | 84.0 | 1596 | 0.6191 | 0.9363 | 0.9369 |
| 0.0 | 85.0 | 1615 | 0.6396 | 0.9324 | 0.9332 |
| 0.0 | 86.0 | 1634 | 0.6396 | 0.9324 | 0.9332 |
| 0.0 | 87.0 | 1653 | 0.5488 | 0.9479 | 0.9479 |
| 0.0008 | 88.0 | 1672 | 0.5376 | 0.9479 | 0.9476 |
| 0.0 | 89.0 | 1691 | 0.5383 | 0.9479 | 0.9476 |
| 0.0 | 90.0 | 1710 | 0.5384 | 0.9479 | 0.9476 |
| 0.0 | 91.0 | 1729 | 0.5384 | 0.9479 | 0.9476 |
| 0.0 | 92.0 | 1748 | 0.5385 | 0.9479 | 0.9476 |
| 0.0 | 93.0 | 1767 | 0.5385 | 0.9479 | 0.9476 |
| 0.0009 | 94.0 | 1786 | 0.5523 | 0.9440 | 0.9438 |
| 0.0 | 95.0 | 1805 | 0.5566 | 0.9440 | 0.9438 |
| 0.0 | 96.0 | 1824 | 0.5570 | 0.9440 | 0.9438 |
| 0.0 | 97.0 | 1843 | 0.5570 | 0.9440 | 0.9438 |
| 0.0 | 98.0 | 1862 | 0.5554 | 0.9440 | 0.9438 |
| 0.0 | 99.0 | 1881 | 0.5533 | 0.9440 | 0.9438 |
| 0.0 | 100.0 | 1900 | 0.5533 | 0.9440 | 0.9438 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.1+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
| 8,865 |
HiTZ/A2T_RoBERTa_SMFA_ACE-arg_WikiEvents-arg | [
"contradiction",
"entailment",
"neutral"
] | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotTextClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() and/or ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to the textual entailment format.
For more information please, take a look to the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
 - `S`: Stanford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
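Since the card states full compatibility with the standard zero-shot classification pipeline, a minimal sketch of that route is shown below; the input sentence, candidate labels, and hypothesis template are illustrative assumptions, not from the card:
```python
# Minimal zero-shot sketch; the labels and template are made up for illustration.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_ACE-arg_WikiEvents-arg",
)

result = classifier(
    "The court sentenced the defendant to five years in prison.",
    candidate_labels=["justice", "attack", "transport"],
    hypothesis_template="This text is about {}.",
)
print(result["labels"][0], result["scores"][0])
```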
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` | 3,612 |
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_42 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_66 | [
"0",
"1"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_66 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_77 | [
"0",
"1"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_77 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_77 | [
"0",
"1",
"2",
"3",
"4"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_88 | [
"0",
"1"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_88 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_99 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CogComp/ZeroShotWiki | null | ---
license: apache-2.0
---
# Model description
A BertForSequenceClassification model that is finetuned on Wikipedia for zero-shot text classification. For details, see our NAACL'22 paper.
# Usage
Concatenate the text sentence with each of the candidate labels as input to the model. The model will output a score for each label. Below is an example.
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("CogComp/ZeroShotWiki")
model = AutoModelForSequenceClassification.from_pretrained("CogComp/ZeroShotWiki")
labels = ["sports", "business", "politics"]
texts = ["As of the 2018 FIFA World Cup, twenty-one final tournaments have been held and a total of 79 national teams have competed."]
with torch.no_grad():
    for text in texts:
        label_score = {}
        for label in labels:
            inputs = tokenizer(text, label, return_tensors='pt')
            out = model(**inputs)
            label_score[label] = float(torch.nn.functional.softmax(out[0], dim=-1)[0][0])
        print(label_score)  # Predict the label with the highest score
``` | 1,141 |
G-WOO/model_150mil-CodeBERTa-small-v1 | null | Entry not found | 15 |
Shikenrua/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,083 |
annahaz/xlm-roberta-base-finetuned-misogyny-en-it-hi-beng | [
"0",
"1"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-finetuned-misogyny-en-it-hi-beng
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-misogyny-en-it-hi-beng
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0140
- Accuracy: 0.9970
- F1: 0.9969
- Precision: 0.9937
- Recall: 1.0
- Mae: 0.0030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3131 | 1.0 | 1759 | 0.4655 | 0.7820 | 0.7682 | 0.7855 | 0.7516 | 0.2180 |
| 0.2644 | 2.0 | 3518 | 0.3231 | 0.8619 | 0.8665 | 0.8091 | 0.9326 | 0.1381 |
| 0.2408 | 3.0 | 5277 | 0.3515 | 0.8801 | 0.8877 | 0.8071 | 0.9863 | 0.1199 |
| 0.1927 | 4.0 | 7036 | 0.1428 | 0.9514 | 0.9512 | 0.9194 | 0.9853 | 0.0486 |
| 0.1333 | 5.0 | 8795 | 0.1186 | 0.9712 | 0.9707 | 0.9478 | 0.9947 | 0.0288 |
| 0.1163 | 6.0 | 10554 | 0.0546 | 0.9879 | 0.9875 | 0.9803 | 0.9947 | 0.0121 |
| 0.0854 | 7.0 | 12313 | 0.0412 | 0.9899 | 0.9896 | 0.9804 | 0.9989 | 0.0101 |
| 0.086 | 8.0 | 14072 | 0.0252 | 0.9949 | 0.9948 | 0.9896 | 1.0 | 0.0051 |
| 0.0395 | 9.0 | 15831 | 0.0179 | 0.9965 | 0.9963 | 0.9927 | 1.0 | 0.0035 |
| 0.0343 | 10.0 | 17590 | 0.0140 | 0.9970 | 0.9969 | 0.9937 | 1.0 | 0.0030 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| 2,506 |
SushantGautam/LogClassification | [
"0",
"1"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LogClassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LogClassification
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,032 |
Cheatham/xlm-roberta-large-finetuned4 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Maha/OGBV-gender-twtrobertabase-en-trac1 | null | Entry not found | 15 |
Mustang/BERT_responsible_AI | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: eupl-1.1
---
## BERT model from the Explainable AI project | 73 |
NDugar/v3large-1epoch | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 XXLarge model with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and was trained with 160GB of raw data.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results for SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **DeepSpeed** as it is faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
```latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` | 4,788 |
anirudh21/xlnet-base-cased-finetuned-rte | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: xlnet-base-cased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6895306859205776
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-rte
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0656
- Accuracy: 0.6895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.7007 | 0.4874 |
| No log | 2.0 | 312 | 0.6289 | 0.6751 |
| No log | 3.0 | 468 | 0.7020 | 0.6606 |
| 0.6146 | 4.0 | 624 | 1.0573 | 0.6570 |
| 0.6146 | 5.0 | 780 | 1.0656 | 0.6895 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,830 |
astarostap/distilbert-cased-antisemitic-tweets | null | ---
license: mit
widget:
- text: "Jews run the world."
---
This model takes a tweet with the word "jew" in it, and determines if it's antisemitic.
*Training data:*
This model was trained on 4k tweets, where ~50% were labeled as antisemitic.
I labeled them myself based on personal experience and knowledge about common antisemitic tropes.
*Note:*
The goal of this model is not to give a final say on what is or is not antisemitic, but rather to serve as a first pass over what might be antisemitic and should be reviewed by human experts.
Please keep in mind that I'm not an expert on antisemitism or hate speech.
Whether something is antisemitic depends on the context, as with any hate speech, and everyone has a different definition of hate speech.
If you would like to collaborate on antisemitism detection, please feel free to contact me at starosta@alumni.stanford.edu
This model is not ready for production, it needs more evaluation and more training data.
| 986 |
boychaboy/MNLI_bert-base-cased | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
mervenoyan/PubMedBERT-QNLI | null |
# PubMedBERT Abstract + Full Text Fine-Tuned on QNLI Task
Use case: You can use it to search through a document for a given question, to see if your question is answered in that document.
LABEL0 is "not entailment" meaning your question is not answered by the context and LABEL1 is "entailment" meaning your question is answered.
> Example input: [CLS] Your question [SEP] The context to be searched in [SEP]
Link to the original model: https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
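A minimal inference sketch based on the input format described above (the question and context passed as a sentence pair, so the tokenizer inserts the [CLS]/[SEP] tokens itself); the question and context strings are placeholders:
```python
# Sketch under the assumption that index 0 = "not entailment" and
# index 1 = "entailment", as stated in the card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mervenoyan/PubMedBERT-QNLI")
model = AutoModelForSequenceClassification.from_pretrained("mervenoyan/PubMedBERT-QNLI")

question = "Does the study discuss protein folding?"  # placeholder
context = "We analyze the folding dynamics of small proteins via simulation."  # placeholder

inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({"not entailment": float(probs[0]), "entailment": float(probs[1])})
```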
Credits to the paper:
```bibtex
@misc{pubmedbert,
  author = {Yu Gu and Robert Tinn and Hao Cheng and Michael Lucas and Naoto Usuyama and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon},
  title = {Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing},
  year = {2020},
  eprint = {arXiv:2007.15779},
}
```
| 888 |
prajjwal1/bert-small-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | The following model is a PyTorch pre-trained model obtained by converting a TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI.
If you use the model, please consider citing the paper
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
```
MNLI: 72.1%
MNLI-mm: 73.76%
```
These models were trained for 4 epochs.
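The card does not include a usage snippet; a minimal NLI inference sketch is given below. The premise/hypothesis pair is illustrative, and the mapping of LABEL_0/LABEL_1/LABEL_2 to contradiction/entailment/neutral should be checked against the model's config before use:
```python
# Hedged sketch: the id-to-label mapping is not documented in the card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-small-mnli")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-small-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print(probs)  # probabilities over LABEL_0, LABEL_1, LABEL_2
```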
[@prajjwal_1](https://twitter.com/prajjwal_1)
| 994 |
yoelvis/topical-segmentation-sensitive | null | Entry not found | 15 |
ikram54/autotrain-harassement-675420038 | [
"Indirect Harassment",
"Not Hate",
"Not Sexist",
"Physical Harassment",
"Sexual Harassment"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ikram54/autotrain-data-harassement
co2_eq_emissions: 2.6332836871905054
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 675420038
- CO2 Emissions (in grams): 2.6332836871905054
## Validation Metrics
- Loss: 0.8747465014457703
- Accuracy: 0.7085201793721974
- Macro F1: 0.579743989078862
- Micro F1: 0.7085201793721974
- Weighted F1: 0.6913786522271296
- Macro Precision: 0.5669375905888698
- Micro Precision: 0.7085201793721974
- Weighted Precision: 0.6760144007300164
- Macro Recall: 0.5940655209452201
- Micro Recall: 0.7085201793721974
- Weighted Recall: 0.7085201793721974
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ikram54/autotrain-harassement-675420038
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ikram54/autotrain-harassement-675420038", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ikram54/autotrain-harassement-675420038", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,396 |
Hieu/nft_label | null | Entry not found | 15 |
MartinoMensio/racism-models-raw-label-epoch-3 | null | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022)
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `raw-label-epoch-3`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'raw-label-epoch-3'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.8621180653572083}, {'label': 'non-racist', 'score': 0.9725497364997864}]
```
For more details, see https://github.com/preyero/neatclass22
| 4,252 |
Souvikcmsa/Roberta_Sentiment_Analysis | [
"0",
"4"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Souvikcmsa/autotrain-data-sentimentAnalysis_By_Souvik
co2_eq_emissions: 4.453029772491864
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 762623422
- CO2 Emissions (in grams): 4.453029772491864
## Validation Metrics
- Loss: 0.40843138098716736
- Accuracy: 0.8302828618968386
- Macro F1: 0.8302447939743022
- Micro F1: 0.8302828618968385
- Weighted F1: 0.8302151855901072
- Macro Precision: 0.8310980209442669
- Micro Precision: 0.8302828618968386
- Weighted Precision: 0.8313262654775467
- Macro Recall: 0.8305699539252172
- Micro Recall: 0.8302828618968386
- Weighted Recall: 0.8302828618968386
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Souvikcmsa/autotrain-sentimentAnalysis_By_Souvik-762623422
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Souvikcmsa/autotrain-sentimentAnalysis_By_Souvik-762623422", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Souvikcmsa/autotrain-sentimentAnalysis_By_Souvik-762623422", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,471 |
Intel/bart-large-mrpc-int8-dynamic | [
"0",
"1"
] | ---
language:
- en
license: apache-2.0
tags:
- text-classification
- int8
- Intel® Neural Compressor
- PostTrainingDynamic
datasets:
- glue
metrics:
- f1
model-index:
- name: bart-large-mrpc-int8-dynamic
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.9050847457627118
---
# INT8 bart-large-mrpc
### Post-training dynamic quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [bart-large-mrpc](https://huggingface.co/Intel/bart-large-mrpc).
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9051|0.9120|
| **Model size (MB)** |547|1556.48|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/bart-large-mrpc-int8-dynamic',
)
```
| 1,084 |
Hate-speech-CNERG/bengali-abusive-MuRIL | null | ---
language: [bn]
license: afl-3.0
---
This model is used for detecting **abusive speech** in **Bengali**. It is fine-tuned from the MuRIL model on a Bengali abusive speech dataset.
The model was trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive)
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
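A minimal usage sketch (not part of the original card); the Bengali input sentence is a placeholder, and the label mapping follows the LABEL_0/LABEL_1 legend above:
```python
# Sketch assuming the pipeline returns the raw LABEL_0 / LABEL_1 names.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/bengali-abusive-MuRIL",
)

label_map = {"LABEL_0": "Normal", "LABEL_1": "Abusive"}
pred = classifier("এটি একটি উদাহরণ বাক্য।")[0]  # placeholder: "This is an example sentence."
print(label_map.get(pred["label"], pred["label"]), pred["score"])
```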
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ | 963 |
dbb/gbert-large-jobad-classification-34 | [
"administration/sekretariat",
"arzt",
"baugewerbe/-ingenieur",
"beschaffung/supply chain",
"bildung/soziales",
"chem. pharm. ausbildung",
"chem. pharm. beruf",
"controlling/finanzen",
"gastro. touri. ausbildung",
"gastro./tourismus",
"hausverw./-bewirt.",
"hr/recruiting",
"indust. konstruk./ingenieur",
"indust. produktion",
"it",
"it ausbildung",
"it studium",
"kaufm. ausbildung",
"kaufm. studium",
"labor/forschung",
"log. ausbildung",
"logistik/transport",
"marketing/kommunikation",
"mechaniker/techniker/elektriker",
"med. ausbildung",
"med. tech. beruf",
"med. verwaltung",
"pflege/therapie",
"quali. kontr./-management",
"recht/justiz",
"rettungsdienst/sicherheit",
"tech. ausbildung",
"tech. studium",
"vertrieb/kundenbetreuung"
] | ---
language: de
tags:
- bert
- recruiting
---
# G(erman)BERT Large Fine-Tuned for Job Ad Classification

| 228 |
TehranNLP-org/electra-base-sst2 | [
"negative",
"positive"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: SST2
type: ''
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9506880733944955
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1754
- Accuracy: 0.9507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: not_parallel
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2105 | 0.2056 | 0.9358 |
| 0.2549 | 2.0 | 4210 | 0.1850 | 0.9438 |
| 0.1162 | 3.0 | 6315 | 0.1754 | 0.9507 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
| 1,766 |
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_99 | [
"0",
"1"
] | Entry not found | 15 |
danlupu/sentiment-analysis | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: sentiment-analysis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8657718120805369
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3124
- Accuracy: 0.8667
- F1: 0.8658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,479 |
tinkoff-ai/response-quality-classifier-tiny | [
"relevance",
"specificity"
] | ---
license: mit
widget:
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]супер, вот только проснулся, у тебя как?"
example_title: "Dialog example 1"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм"
example_title: "Dialog example 2"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?"
example_title: "Dialog example 3"
language:
- ru
tags:
- conversational
---
This classification model is based on [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).
The model should be used to produce relevance and specificity scores for the last message in the context of a dialogue.
The labels explanation:
- `relevance`: whether the last message in the dialogue is relevant in the context of the full dialogue.
- `specificity`: whether the last message in the dialogue is interesting and promotes the continuation of the dialogue.
It is pretrained on a large corpus of dialog data in unsupervised manner: the model is trained to predict whether last response was in a real dialog, or it was pulled from some other dialog at random.
Then it was finetuned on manually labelled examples (dataset will be posted soon).
The model was trained with three messages in the context and one response. Each message was tokenized separately with ``` max_length = 32 ```.
The performance of the model on the validation split (dataset will be posted soon), with the best thresholds for validation samples:
| | threshold | f0.5 | ROC AUC |
|:------------|------------:|-------:|----------:|
| relevance | 0.51 | 0.82 | 0.74 |
| specificity | 0.54 | 0.81 | 0.8 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-quality-classifier-tiny')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-quality-classifier-tiny')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.sigmoid(logits)[0].cpu().detach().numpy()
relevance, specificity = probas
```
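A small follow-up sketch (an addition, not from the card) that turns the probabilities from the snippet above into binary decisions, assuming the best validation thresholds from the table carry over to new data:
```python
# `relevance` and `specificity` come from the snippet above.
is_relevant = relevance >= 0.51
is_specific = specificity >= 0.54
print({"relevant": bool(is_relevant), "specific": bool(is_specific)})
```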
You can easily interact with this model in this [app](https://huggingface.co/spaces/tinkoff-ai/response-quality-classifiers).
The work was done during an internship at Tinkoff by [egoriyaa](https://github.com/egoriyaa), mentored by [solemn-leader](https://huggingface.co/solemn-leader). | 2,569 |
anchit48/fine-tuned-sentiment-analysis-customer-feedback | [
"NEGATIVE",
"POSITIVE"
] | Entry not found | 15 |
obokkkk/kc-bert_finetuned_unsmile | [
"clean",
"기타 혐오",
"남성",
"성소수자",
"악플/욕설",
"여성/가족",
"연령",
"인종/국적",
"종교",
"지역"
] | ---
tags:
- generated_from_trainer
model-index:
- name: kc-bert_finetuned_unsmile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kc-bert_finetuned_unsmile
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1326
- Lrap: 0.8753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Lrap |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 235 | 0.1458 | 0.8612 |
| No log | 2.0 | 470 | 0.1280 | 0.8738 |
| 0.1685 | 3.0 | 705 | 0.1257 | 0.8791 |
| 0.1685 | 4.0 | 940 | 0.1281 | 0.8777 |
| 0.0774 | 5.0 | 1175 | 0.1326 | 0.8753 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
| 1,533 |
waboucay/camembert-large-finetuned-repnum_wl-rua_wl_3_classes | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 77.3 | 77.3 |
| test | 78.0 | 77.9 | | 367 |
bousejin/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.925169929474641
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.925
- F1: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8419 | 1.0 | 250 | 0.3236 | 0.9025 | 0.8999 |
| 0.258 | 2.0 | 500 | 0.2202 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
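Since the usage sections above are left as "More information needed", here is a minimal inference sketch; it is an assumption rather than part of the card, and the emotion label names should be confirmed against the model's `config.id2label`:
```python
# Hedged sketch: the output labels are expected to be the emotion names
# listed for this model (sadness, joy, love, anger, fear, surprise).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bousejin/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```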
| 1,803 |
itzo/bert-base-uncased-fine-tuned-on-clinc_oos-dataset | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declined",
"carry_on",
"change_accent",
"change_ai_name",
"change_language",
"change_speed",
"change_user_name",
"change_volume",
"confirm_reservation",
"cook_time",
"credit_limit",
"credit_limit_change",
"credit_score",
"current_location",
"damaged_card",
"date",
"definition",
"direct_deposit",
"directions",
"distance",
"do_you_have_pets",
"exchange_rate",
"expiration_date",
"find_phone",
"flight_status",
"flip_coin",
"food_last",
"freeze_account",
"fun_fact",
"gas",
"gas_type",
"goodbye",
"greeting",
"how_busy",
"how_old_are_you",
"improve_credit_score",
"income",
"ingredient_substitution",
"ingredients_list",
"insurance",
"insurance_change",
"interest_rate",
"international_fees",
"international_visa",
"jump_start",
"last_maintenance",
"lost_luggage",
"make_call",
"maybe",
"meal_suggestion",
"meaning_of_life",
"measurement_conversion",
"meeting_schedule",
"min_payment",
"mpg",
"new_card",
"next_holiday",
"next_song",
"no",
"nutrition_info",
"oil_change_how",
"oil_change_when",
"oos",
"order",
"order_checks",
"order_status",
"pay_bill",
"payday",
"pin_change",
"play_music",
"plug_type",
"pto_balance",
"pto_request",
"pto_request_status",
"pto_used",
"recipe",
"redeem_rewards",
"reminder",
"reminder_update",
"repeat",
"replacement_card_duration",
"report_fraud",
"report_lost_card",
"reset_settings",
"restaurant_reservation",
"restaurant_reviews",
"restaurant_suggestion",
"rewards_balance",
"roll_dice",
"rollover_401k",
"routing",
"schedule_maintenance",
"schedule_meeting",
"share_location",
"shopping_list",
"shopping_list_update",
"smart_home",
"spelling",
"spending_history",
"sync_device",
"taxes",
"tell_joke",
"text",
"thank_you",
"time",
"timer",
"timezone",
"tire_change",
"tire_pressure",
"todo_list",
"todo_list_update",
"traffic",
"transactions",
"transfer",
"translate",
"travel_alert",
"travel_notification",
"travel_suggestion",
"uber",
"update_playlist",
"user_name",
"vaccines",
"w2",
"weather",
"what_are_your_hobbies",
"what_can_i_ask_you",
"what_is_your_name",
"what_song",
"where_are_you_from",
"whisper_mode",
"who_do_you_work_for",
"who_made_you",
"yes"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: bert-base-uncased-fine-tuned-on-clinc_oos-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-fine-tuned-on-clinc_oos-dataset
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2811
- Accuracy Score: 0.9239
- F1 Score: 0.9213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:--------:|
| 4.4271 | 1.0 | 239 | 3.5773 | 0.6116 | 0.5732 |
| 3.0415 | 2.0 | 478 | 2.4076 | 0.8390 | 0.8241 |
| 2.1182 | 3.0 | 717 | 1.7324 | 0.8994 | 0.8934 |
| 1.5897 | 4.0 | 956 | 1.3863 | 0.9210 | 0.9171 |
| 1.3458 | 5.0 | 1195 | 1.2811 | 0.9239 | 0.9213 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,789 |
Team-PIXEL/pixel-base-finetuned-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-mnli
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE MNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 15000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,135 |
Tomas23/twitter-roberta-base-mar2022-finetuned-emotion | [
"anger",
"joy",
"optimism",
"sadness"
] | ---
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: twitter-roberta-base-mar2022-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.8191414496833216
- name: F1
type: f1
value: 0.8170974933422602
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-mar2022-finetuned-emotion
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-mar2022](https://huggingface.co/cardiffnlp/twitter-roberta-base-mar2022) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5146
- Accuracy: 0.8191
- F1: 0.8171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8945 | 1.0 | 102 | 0.5831 | 0.7995 | 0.7887 |
| 0.5176 | 2.0 | 204 | 0.5266 | 0.8235 | 0.8200 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,852 |
CenIA/bert-base-spanish-wwm-uncased-finetuned-pawsx | null | Entry not found | 15 |
CouchCat/ma_sa_v7_distil | [
"negative",
"neutral",
"positive"
] | ---
language: en
license: mit
tags:
- sentiment-analysis
widget:
- text: "I am disappointed in the terrible quality of my dress"
---
### Description
A Sentiment Analysis model trained on customer feedback data using DistilBert.
Possible sentiments are:
* negative
* neutral
* positive
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_sa_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_sa_v7_distil")
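# A minimal inference sketch (added; not part of the original card).
# Index order negative / neutral / positive is an assumption based on the label list.
import torch
inputs = tokenizer("I am disappointed in the terrible quality of my dress", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)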
``` | 547 |
ItcastAI/bert_finetunning_test | null | Entry not found | 15 |
Vaibhavbrkn/grammer_classiffication | null | Entry not found | 15 |
airKlizz/xlm-roberta-base-germeval21-toxic-with-task-specific-pretraining-and-data-augmentation | null | Entry not found | 15 |
arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2 | [
"negative",
"neutral",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-sentiment-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.7614
- name: F1
type: f1
value: 0.7614
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5882
- Accuracy: 0.7614
- F1: 0.7614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,777 |
blackbird/bert-base-uncased-MNLI-v1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
BERT-based model fine-tuned on MNLI with our custom training routine.
Yields 60% accuracy on the adversarial HANS dataset. | 118 |
mnaylor/bioclinical-bert-finetuned-mtsamples | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | # BioClinical BERT Fine-tuned on MTSamples
This model is simply [Alsentzer's Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) fine-tuned on the MTSamples dataset, with a classification task defined in [this repo](https://github.com/socd06/medical-nlp). | 277 |
sismetanin/rubert_conversational-ru-sentiment-rureviews | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## RuBERT-Conversational-ru-sentiment-RuReviews
RuBERT-Conversational-ru-sentiment-RuReviews is a [RuBERT-Conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model fine-tuned on the [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the “Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia.
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
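A small sketch of this scoring scheme (the numbers are illustrative, taken from the SBERT-Large row above):

```python
# Illustrative computation of the leaderboard score described above.
# Per-task score = unweighted average of that task's metrics;
# leaderboard score = macro-average over all task scores.
task_metrics = {
    "RuSentiment": [78.58, 75.85],  # e.g. weighted F1 and macro F1
    "RuReviews": [77.41],           # single metric: F1
}
task_scores = {task: sum(m) / len(m) for task, m in task_metrics.items()}
leaderboard_score = sum(task_scores.values()) / len(task_scores)
print(task_scores, round(leaderboard_score, 2))
```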
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` | 6,400 |
textattack/albert-base-v2-rotten-tomatoes | null | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8808630393996247, as measured by the
eval set accuracy, found after 1 epoch.
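These TextAttack checkpoints on the Hub load like any other `transformers` model; a minimal loading sketch (an assumption, not part of the original card):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical loading sketch; the model ID is taken from this card's row.
model_id = "textattack/albert-base-v2-rotten-tomatoes"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
```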
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
| 630 |
tr3cks/3LabelsSentimentAnalysisSpanish | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
ctu-aic/xlm-roberta-large-squad2-ctkfacts | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: cc-by-sa-3.0
---
| 33 |
davidmasip/racism | null | ---
license: cc
language: es
widget:
- text: "Me cae muy bien."
example_title: "Non-racist example"
- text: "Unos menas agreden a una mujer."
example_title: "Racist example"
---
Model to predict whether a given text is racist or not:
* `LABEL_0` output indicates non-racist text
* `LABEL_1` output indicates racist text
Usage:
```python
from transformers import pipeline

RACISM_MODEL = "davidmasip/racism"
racism_analysis_pipe = pipeline("text-classification",
                                model=RACISM_MODEL, tokenizer=RACISM_MODEL)

results = racism_analysis_pipe("Unos menas agreden a una mujer.")

def clean_labels(results):
    for result in results:
        # Note: fixed to index the current item rather than the whole list
        label = "Non-racist" if result["label"] == "LABEL_0" else "Racist"
        result["label"] = label

clean_labels(results)
print(results)
``` | 856 |
hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal | [
"Convención Internacional sobre la Protección de los Derechos de todos los Trabajadores Migratorios y de sus Familias",
"Convención de los Derechos del Niño",
"Convención sobre la Eliminación de todas las formas de Discriminación contra la Mujer",
"Pacto Internacional de Derechos Civiles y Políticos",
"Convención Internacional Sobre la Eliminación de Todas las Formas de Discriminación Racial",
"Convención contra la Tortura y otros Tratos o Penas Crueles, Inhumanos o Degradantes",
"Convención sobre los Derechos de las Personas con Discapacidad",
"Pacto Internacional de Derechos Económicos, Sociales y Culturales"
] | ---
license: cc-by-nc-4.0
language: es
widget:
- text: "A los 4 Civiles de Rosarito se les acusó de cometer varios delitos federales en flagrancia, aunque se ha comprobado que no fueron detenidos en el lugar en el que los militares señalaron en su parte informativo. Las cuatro personas refieren que el 17 de junio de 2009 fueron obligados a firmar sus declaraciones ante el Ministerio Público mediante torturas y con los ojos vendados. A pesar de haberlos visto severamente golpeados, el agente del Ministerio Público determinó que debían seguir bajo custodia militar."
---
## Model description
jurisbert-class-tratados-internacionales-sistema-universal is a text classification model trained on a Spanish-language corpus in a supervised fashion.
It was trained on top of JurisBERT, a masked-language model pre-trained on a Spanish legal corpus.
The model therefore takes the text you give it and predicts which of the following 8 UN conventions is most related to your Spanish text:
1) Convención Internacional sobre la Protección de los Derechos de todos los Trabajadores Migratorios y de sus Familias
2) Convención de los Derechos del Niño
3) Convención sobre la Eliminación de todas las formas de Discriminación contra la Mujer
4) Pacto Internacional de Derechos Civiles y Políticos
5) Convención Internacional Sobre la Eliminación de Todas las Formas de Discriminación Racial
6) Convención contra la Tortura y otros Tratos o Penas Crueles, Inhumanos o Degradantes
7) Convención sobre los Derechos de las Personas con Discapacidad
8) Pacto Internacional de Derechos Económicos, Sociales y Culturales
## Intended uses & limitations
You can use the model to find the UN conventions most related to the text you provide.
Note that this model is primarily intended for classification tasks, i.e. when you mainly want to know which conventions are most related to the topic at hand.
## How to use
```python
# You can use this model directly with SimpleTransformers.
# To install SimpleTransformers:
# pip install simpletransformers
from simpletransformers.classification import ClassificationModel

# Create a ClassificationModel
model = ClassificationModel(
    "roberta", "hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal", use_cuda=True
)

predecir = ["adoptar a un niño"]  # "to adopt a child"
predictions, raw_outputs = model.predict(predecir)
print(predictions)
```
## Training data
The jurisbert-class-tratados-internacionales-sistema-universal model was trained on a dataset of 3,799 texts, each labelled with one of the 8 convention types.
## Training procedure
Texts were processed with SimpleTransformers and trained for three epochs, using the RoBERTa architecture with JurisBERT (a masked-language model pre-trained on a Spanish legal corpus) as the base model.
## Variables and metrics
Of our 3,799 examples, 90% were used for training and the rest for evaluation:
Train: 3,419
Test: 380
## Evaluation results
| | precision | recall | f1-score | support |
|---|---|---|---|---|
| accuracy | | |0.91 | 380 |
| macro avg | 0.92 |0.91 |0.91 | 380 |
| weighted avg | 0.91 | 0.91 |0.91 | 380 |
Accuracy: 0.9105
## Team
The team consists of @gpalomeque, @aureliopvs, @cecilimacias, @giomadariaga and @cattsytabla | 3,531 |
dhlee347/distilbert-imdb | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1796
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2808 | 1.0 | 782 | 0.1796 | 0.9302 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.6
| 1,562 |
Hate-speech-CNERG/hindi-abusive-MuRIL | null | ---
language: [hi]
license: afl-3.0
---
This model detects **abusive speech** in **Devanagari Hindi**. It was obtained by fine-tuning the MuRIL model on a Hindi abusive speech dataset.
The model was trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive)
- LABEL_0 → Normal
- LABEL_1 → Abusive
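A minimal usage sketch (added for illustration; the pipeline call and example sentence are assumptions, and the label mapping follows the list above):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Hate-speech-CNERG/hindi-abusive-MuRIL")
label_map = {"LABEL_0": "Normal", "LABEL_1": "Abusive"}

result = classifier("यह एक उदाहरण वाक्य है")[0]  # "This is an example sentence"
print(label_map[result["label"]], result["score"])
```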
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ | 970 |
arize-ai/distilbert_reviews_with_language_drift | [
"NEGATIVE",
"NEUTRAL",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ecommerce_reviews_with_language_drift
metrics:
- accuracy
- f1
model-index:
- name: distilbert_reviews_with_language_drift
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ecommerce_reviews_with_language_drift
type: ecommerce_reviews_with_language_drift
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.818
- name: F1
type: f1
value: 0.8167126877417763
widget:
- text: "Poor quality of fabric and ridiculously tight at chest. It's way too short."
example_title: "Negative"
- text: "One worked perfectly, but the other one has a slight leak and we end up with water underneath the filter."
example_title: "Neutral"
- text: "I liked the price most! Nothing to dislike here!"
example_title: "Positive"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_reviews_with_language_drift
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ecommerce_reviews_with_language_drift dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4970
- Accuracy: 0.818
- F1: 0.8167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.593 | 1.0 | 500 | 0.4723 | 0.799 | 0.7976 |
| 0.3714 | 2.0 | 1000 | 0.4679 | 0.818 | 0.8177 |
| 0.2652 | 3.0 | 1500 | 0.4970 | 0.818 | 0.8167 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 2,341 |
Abderrahim2/bert-finetuned-Location | [
"Australia",
"United Kingdom",
"United States"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-Location
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-Location
This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5462
- F1: 0.8167
- Roc Auc: 0.8624
- Accuracy: 0.8133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4229 | 1.0 | 742 | 0.3615 | 0.7402 | 0.8014 | 0.6900 |
| 0.3722 | 2.0 | 1484 | 0.3103 | 0.7906 | 0.8416 | 0.7796 |
| 0.262 | 3.0 | 2226 | 0.3364 | 0.8135 | 0.8600 | 0.8100 |
| 0.2239 | 4.0 | 2968 | 0.4593 | 0.8085 | 0.8561 | 0.8066 |
| 0.1461 | 5.0 | 3710 | 0.5534 | 0.7923 | 0.8440 | 0.7904 |
| 0.1333 | 6.0 | 4452 | 0.5462 | 0.8167 | 0.8624 | 0.8133 |
| 0.0667 | 7.0 | 5194 | 0.6298 | 0.7972 | 0.8479 | 0.7958 |
| 0.0616 | 8.0 | 5936 | 0.6362 | 0.8075 | 0.8556 | 0.8059 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 2,033 |
Yarn/distilbert-base-uncased-mnli-finetuned-mnli | [
"CONTRADICTION",
"ENTAILMENT",
"NEUTRAL"
] | Entry not found | 15 |
nikitakotsehub/AirlineDistilBERT | [
"NEGATIVE",
"POSITIVE"
] | Entry not found | 15 |
waboucay/camembert-large-finetuned-repnum_wl-rua_wl | [
"contradiction",
"non-contradiction"
] | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 84.5 | 84.3 |
| test | 85.2 | 85.1 |
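A minimal inference sketch (an assumption, not from the original card; the model classifies French sentence pairs into the two labels listed for this row):

```python
from transformers import pipeline

nli = pipeline("text-classification", model="waboucay/camembert-large-finetuned-repnum_wl-rua_wl")

# Sentence-pair input; the premise/hypothesis texts are illustrative.
premise = "Le chat dort sur le canapé."
hypothesis = "Le chat est éveillé."
print(nli({"text": premise, "text_pair": hypothesis}))
```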
| 368 |
RogerKam/roberta_RCADE_fine_tuned_sentiment_covid_news | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_RCADE_fine_tuned_sentiment_covid_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_RCADE_fine_tuned_sentiment_covid_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1662
- Accuracy: 0.9700
- F1 Score: 0.9700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,196 |
KhawajaAbaid/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
Emirhan/51k-finetuned-bert-model | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
InfoCoV/Senti-Cro-CoV-cseBERT | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Maelstrom77/vibert | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
Wiirin/DistilBERT-finetuned-PubMed-FoodCancer | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
abhishek/autonlp-imdb_sentiment_classification-31154 | [
"0",
"1"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 31154
## Validation Metrics
- Loss: 0.19292379915714264
- Accuracy: 0.9395
- Precision: 0.9569557080474111
- Recall: 0.9204
- AUC: 0.9851040399999998
- F1: 0.9383219492302988
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb_sentiment_classification-31154
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,052 |
aicast/bert_finetuning_test | null | Entry not found | 15 |
cemdenizsel/51k-finetuned-bert-model | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
daekeun-ml/koelectra-small-v3-nsmc | [
"0",
"1"
] | ---
language:
- ko
tags:
- classification
license: mit
datasets:
- nsmc
metrics:
- accuracy
- f1
- precision
- recall
---
# Sentiment Binary Classification (fine-tuning with KoELECTRA-Small-v3 model and Naver Sentiment Movie Corpus dataset)
## Usage (Amazon SageMaker inference applicable)
It uses the interface of the SageMaker Inference Toolkit as-is, so it can be easily deployed to a SageMaker endpoint.
### inference_nsmc.py
```python
import json
import sys
import logging
import torch
from torch import nn
from transformers import ElectraConfig
from transformers import ElectraModel, AutoTokenizer, ElectraTokenizer, ElectraForSequenceClassification

logging.basicConfig(
    level=logging.INFO,
    format='[{%(filename)s:%(lineno)d} %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler(filename='tmp.log'),
        logging.StreamHandler(sys.stdout)
    ]
)
logger = logging.getLogger(__name__)

max_seq_length = 128
classes = ['Neg', 'Pos']

tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/koelectra-small-v3-nsmc")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def model_fn(model_path=None):
    ####
    # If you have your own trained model
    # Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator'
    ####
    #config = ElectraConfig.from_json_file(f'{model_path}/config.json')
    #model = ElectraForSequenceClassification.from_pretrained(f'{model_path}/model.pth', config=config)

    # Download model from the Huggingface hub
    model = ElectraForSequenceClassification.from_pretrained('daekeun-ml/koelectra-small-v3-nsmc')
    model.to(device)
    return model

def input_fn(input_data, content_type="application/jsonlines"):
    data_str = input_data.decode("utf-8")
    jsonlines = data_str.split("\n")
    transformed_inputs = []

    for jsonline in jsonlines:
        text = json.loads(jsonline)["text"][0]
        logger.info("input text: {}".format(text))
        encode_plus_token = tokenizer.encode_plus(
            text,
            max_length=max_seq_length,
            add_special_tokens=True,
            return_token_type_ids=False,
            padding="max_length",
            return_attention_mask=True,
            return_tensors="pt",
            truncation=True,
        )
        transformed_inputs.append(encode_plus_token)

    return transformed_inputs

def predict_fn(transformed_inputs, model):
    predicted_classes = []

    for data in transformed_inputs:
        data = data.to(device)
        output = model(**data)

        softmax_fn = nn.Softmax(dim=1)
        softmax_output = softmax_fn(output[0])
        _, prediction = torch.max(softmax_output, dim=1)

        predicted_class_idx = prediction.item()
        predicted_class = classes[predicted_class_idx]
        score = softmax_output[0][predicted_class_idx]
        logger.info("predicted_class: {}".format(predicted_class))

        prediction_dict = {}
        prediction_dict["predicted_label"] = predicted_class
        prediction_dict['score'] = score.cpu().detach().numpy().tolist()

        jsonline = json.dumps(prediction_dict)
        logger.info("jsonline: {}".format(jsonline))
        predicted_classes.append(jsonline)

    predicted_classes_jsonlines = "\n".join(predicted_classes)
    return predicted_classes_jsonlines

def output_fn(outputs, accept="application/jsonlines"):
    return outputs, accept
```
### test.py
```python
>>> from inference_nsmc import model_fn, input_fn, predict_fn, output_fn
>>> with open('samples/nsmc.txt', mode='rb') as file:
>>> model_input_data = file.read()
>>> model = model_fn()
>>> transformed_inputs = input_fn(model_input_data)
>>> predicted_classes_jsonlines = predict_fn(transformed_inputs, model)
>>> model_outputs = output_fn(predicted_classes_jsonlines)
>>> print(model_outputs[0])
[{inference_nsmc.py:47} INFO - input text: 이 영화는 최고의 영화입니다
[{inference_nsmc.py:47} INFO - input text: 최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다
[{inference_nsmc.py:77} INFO - predicted_class: Pos
[{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Pos", "score": 0.9619030952453613}
[{inference_nsmc.py:77} INFO - predicted_class: Neg
[{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Neg", "score": 0.9994170665740967}
{"predicted_label": "Pos", "score": 0.9619030952453613}
{"predicted_label": "Neg", "score": 0.9994170665740967}
```
### Sample data (samples/nsmc.txt)
```
{"text": ["이 영화는 최고의 영화입니다"]}
{"text": ["최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다"]}
```
## References
- KoELECTRA: https://github.com/monologg/KoELECTRA
- Naver Sentiment Movie Corpus Dataset: https://github.com/e9t/nsmc | 4,710 |
gchhablani/fnet-base-finetuned-sst2 | [
"negative",
"positive"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8944954128440367
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-sst2
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4674
- Accuracy: 0.8945
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name sst2 \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-sst2 \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.2956 | 1.0 | 4210 | 0.8819 | 0.3128 |
| 0.1746 | 2.0 | 8420 | 0.8979 | 0.3850 |
| 0.1204 | 3.0 | 12630 | 0.8945 | 0.4674 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| 2,632 |
gurkan08/turkish-product-comment-sentiment-classification | [
"positive",
"negative"
] | Entry not found | 15 |
howey/roberta-large-cola | null | Entry not found | 15 |
jpcorb20/toxic-detector-distilroberta | [
"toxic",
"severe_toxic",
"obscene",
"threat",
"insult",
"identity_hate"
] | # Distilroberta for toxic comment detection
See my GitHub repo [toxic-comment-server](https://github.com/jpcorb20/toxic-comment-server)
The model was trained from [DistilRoberta](https://huggingface.co/distilroberta-base) on [Kaggle Toxic Comments](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) with the BCEWithLogits loss for multi-label prediction. Thus, please apply a sigmoid activation to the logits; the model is not meant to be used with a softmax over the outputs (as the HF widget does, for example).
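A minimal sketch of the sigmoid-based scoring (the example sentence is illustrative; the label order follows the label list for this row):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jpcorb20/toxic-detector-distilroberta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
inputs = tokenizer("You are a wonderful person.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # independent per-label probabilities, not softmax
print({label: round(p.item(), 3) for label, p in zip(labels, probs)})
```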
## Evaluation
F1 scores:
- toxic: 0.72
- severe_toxic: 0.38
- obscene: 0.72
- threat: 0.52
- insult: 0.69
- identity_hate: 0.60

Macro-F1: 0.61 | 678 |
peril10/Pypinion | null | Entry not found | 15 |
pertschuk/albert-base-quora-classifier | null | Entry not found | 15 |
pertschuk/albert-large-intent-v2 | null | Entry not found | 15 |
razent/SciFive-base-PMC | null | ---
language:
- en
tags:
- token-classification
- text-classification
- question-answering
- text2text-generation
- text-generation
datasets:
- pmc/open_access
---
# SciFive PMC Base
## Introduction
Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)
Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_
## How to use
For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-base-PMC")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-base-PMC")
sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ."
text = "ncbi_ner: " + sentence + " </s>"
encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=256,
    early_stopping=True
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
``` | 1,374 |
textattack/facebook-bart-large-CoLA | null | Entry not found | 15 |
textattack/facebook-bart-large-MNLI | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Anthos23/FS-distilroberta-fine-tuned | [
"negative",
"neutral",
"positive"
] | Entry not found | 15 |
FinScience/FS-distilroberta-fine-tuned | [
"negative",
"neutral",
"positive"
] | ---
language:
- en
---
# FS-distilroberta-fine-tuned
The model was obtained by fine-tuning the "mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis" model for sentiment analysis on financial news gathered by FinScience software. It predicts the sentiment of a news item with one label ("negative", "neutral" or "positive"). At the moment, the model works only in English.
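A minimal usage sketch (an assumption, not from the original card; the example headline is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="FinScience/FS-distilroberta-fine-tuned")
print(classifier("Shares plunged after the company missed earnings estimates."))
```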
## Training data
The training dataset consists of 2558 news titles that were manually labelled by the FinScience team using the doccano tool. A "neutral" label was assigned to news for which no agreement was reached. 70% of the dataset (1790 titles) was employed as the training set, 15% (384) as the validation set, and the remaining 15% as the test set. F1-score (macro) was selected as the evaluation metric.
| Set | Number of news | Scope |
| -------- | ----------------- | ----------------- |
| Training | 1790 | Training the model|
| Validation | 384 | Selecting the hyper-parameters |
| Test | 384 | Evaluating the performance|
## Accuracy
The table below summarizes the performance of the model, evaluated on the same test set of 384 held-out titles:
| Model | Accuracy | F1-score (macro) |
| -------- | ---------------------- | ------------------- |
| FS-distilroberta-fine-tuned | 76% | 76% |
| 1,346 |
dapang/distilbert-base-uncased-finetuned-toxicity | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-toxicity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-toxicity
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0086
- Accuracy: 0.999
- F1: 0.9990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.589778712669143e-05
- train_batch_size: 400
- eval_batch_size: 400
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 20 | 0.0142 | 0.998 | 0.998 |
| No log | 2.0 | 40 | 0.0112 | 0.997 | 0.9970 |
| No log | 3.0 | 60 | 0.0088 | 0.999 | 0.9990 |
| No log | 4.0 | 80 | 0.0091 | 0.998 | 0.998 |
| No log | 5.0 | 100 | 0.0086 | 0.999 | 0.9990 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.11.0
| 1,735 |
Yah216/Arabic_poem_meter_3 | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language: ar
widget:
- text: "قفا نبك من ذِكرى حبيب ومنزلِ بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ"
- text: "سَلو قَلبي غَداةَ سَلا وَثابا لَعَلَّ عَلى الجَمالِ لَهُ عِتابا"
co2_eq_emissions: 404.66986451902227
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- CO2 Emissions (in grams): 404.66986451902227
## Dataset
We used the APCD dataset, cited below, for pretraining the model. The dataset has been cleaned and only the main text and the meter columns were kept:
```
@Article{Yousef2019LearningMetersArabicEnglish-arxiv,
author = {Yousef, Waleed A. and Ibrahime, Omar M. and Madbouly, Taha M. and Mahmoud,
Moustafa A.},
title = {Learning Meters of Arabic and English Poems With Recurrent Neural Networks: a Step
Forward for Language Understanding and Synthesis},
journal = {arXiv preprint arXiv:1905.05700},
year = 2019,
url = {https://github.com/hci-lab/LearningMetersPoems}
}
```
## Validation Metrics
- Loss: 0.21315555274486542
- Accuracy: 0.9493554089595999
- Macro F1: 0.7537353091512587
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "قفا نبك من ذِكرى حبيب ومنزلِ بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ"}' https://api-inference.huggingface.co/models/Yah216/Arabic_poem_meter_3
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Yah216/Arabic_poem_meter_3", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Yah216/Arabic_poem_meter_3", use_auth_token=True)
inputs = tokenizer("قفا نبك من ذِكرى حبيب ومنزلِ بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ", return_tensors="pt")
outputs = model(**inputs)
``` | 1,876 |
speeqo/distilbert-base-uncased-finetuned-sst-2-english | [
"NEGATIVE",
"POSITIVE"
] | ---
language: en
license: apache-2.0
datasets:
- sst-2
---
# DistilBERT base uncased finetuned SST-2
This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7).
For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased).
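A minimal usage sketch (added for illustration; not part of the original card):

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="speeqo/distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("I love this movie!"))  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```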
# Fine-tuning hyper-parameters
- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
# Bias
Based on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like `This film was filmed in COUNTRY`, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this [colab](https://colab.research.google.com/gist/ageron/fb2f64fb145b4bc7c49efc97e5f114d3/biasmap.ipynb), [Aurélien Géron](https://twitter.com/aureliengeron) made an interesting map plotting these probabilities for each country.
<img src="https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/map.jpeg" alt="Map of positive probabilities per country." width="500"/>
We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: [WinoBias](https://huggingface.co/datasets/wino_bias), [WinoGender](https://huggingface.co/datasets/super_glue), [Stereoset](https://huggingface.co/datasets/stereoset).
| 1,900 |
titi7242229/roberta-base-bne-finetuned_personality_multi_2 | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi_2
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2983
- Accuracy: 0.5429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3256 | 1.0 | 125 | 2.2642 | 0.2161 |
| 1.815 | 2.0 | 250 | 1.9569 | 0.3919 |
| 1.614 | 3.0 | 375 | 1.7264 | 0.5014 |
| 1.1718 | 4.0 | 500 | 1.6387 | 0.5239 |
| 1.135 | 5.0 | 625 | 1.6259 | 0.5245 |
| 0.5637 | 6.0 | 750 | 1.6443 | 0.5372 |
| 0.3672 | 7.0 | 875 | 1.7146 | 0.5326 |
| 0.3249 | 8.0 | 1000 | 1.8099 | 0.5297 |
| 0.1791 | 9.0 | 1125 | 1.8888 | 0.5285 |
| 0.2175 | 10.0 | 1250 | 1.9228 | 0.5326 |
| 0.0465 | 11.0 | 1375 | 1.9753 | 0.5435 |
| 0.1154 | 12.0 | 1500 | 2.1102 | 0.5256 |
| 0.0745 | 13.0 | 1625 | 2.1319 | 0.5429 |
| 0.0281 | 14.0 | 1750 | 2.1743 | 0.5360 |
| 0.0173 | 15.0 | 1875 | 2.2087 | 0.5441 |
| 0.0269 | 16.0 | 2000 | 2.2456 | 0.5424 |
| 0.0107 | 17.0 | 2125 | 2.2685 | 0.5458 |
| 0.0268 | 18.0 | 2250 | 2.2893 | 0.5383 |
| 0.0245 | 19.0 | 2375 | 2.2943 | 0.5418 |
| 0.0156 | 20.0 | 2500 | 2.2983 | 0.5429 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 2,579 |