| modelId (string, 6–107 chars) | label list | readme (string, 0–56.2k chars) | readme_len (int64) |
|---|---|---|---|
boychaboy/kobias_klue-roberta-small | [
"biased",
"none"
] | Entry not found | 15 |
bsingh/roberta_goEmotion | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_3",
"LABEL_4",
... | ---
language: en
tags:
- text-classification
- pytorch
- roberta
- emotions
datasets:
- go_emotions
license: mit
widget:
- text: "I am not feeling well today."
---
## This model is trained on the GoEmotions dataset, which contains 58k Reddit comments labeled with 28 emotions
- admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral
## Training details:
- The training script is provided here: https://github.com/bsinghpratap/roberta_train_goEmotion
- Please feel free to open an issue in the repo if you have trouble running the model; I will try to respond as soon as possible.
- The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', and 'surprise'.
- I'll try to fine-tune the model further and update here if RoBERTa achieves better performance.
- Each text datapoint can have more than one label. Most of the training set has exactly one label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}). So for now I simply used the first label for each datapoint. Not ideal, but it does a decent job.
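A minimal sketch of this first-label reduction (toy data for illustration, not the actual training script):

```python
from collections import Counter

# Toy multi-label datapoints in the GoEmotions style: each comment may
# carry several emotion labels.
datapoints = [
    ("Thanks, that made my day!", ["gratitude", "joy"]),
    ("I can't believe this happened.", ["surprise"]),
    ("This is so frustrating.", ["annoyance", "anger", "disappointment"]),
]

# Distribution of labels-per-datapoint, analogous to the Counter above
label_count_dist = Counter(len(labels) for _, labels in datapoints)

# Keep only the first label of each datapoint for single-label training
single_label = [(text, labels[0]) for text, labels in datapoints]
```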
## Model Performance
| Emotion | GoEmotions Paper | RoBERTa | Support |
|---|---|---|---|
| admiration | 0.65 | 0.62 | 504 |
| amusement | 0.80 | 0.78 | 252 |
| anger | 0.47 | 0.44 | 197 |
| annoyance | 0.34 | 0.22 | 286 |
| approval | 0.36 | 0.31 | 318 |
| caring | 0.39 | 0.24 | 114 |
| confusion | 0.37 | 0.29 | 139 |
| curiosity | 0.54 | 0.48 | 233 |
| disappointment | 0.28 | 0.18 | 127 |
| disapproval | 0.39 | 0.26 | 220 |
| gratitude | 0.86 | 0.84 | 288 |
| joy | 0.51 | 0.47 | 116 |
| love | 0.78 | 0.68 | 169 |
| neutral | 0.68 | 0.61 | 1606 |
| optimism | 0.51 | 0.52 | 120 |
| realization | 0.21 | 0.15 | 109 |
| sadness | 0.49 | 0.42 | 108 |
| 4,988 |
burmaxwell/Bert_temp | null | Entry not found | 15 |
cemdenizsel/10k-finetuned-bert-model | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
damien-ir/kosentelectra-discriminator-v2-mixed | null | Entry not found | 15 |
fabriceyhc/bert-base-uncased-yahoo_answers_topics | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- yahoo_answers_topics
metrics:
- accuracy
model-index:
- name: bert-base-uncased-yahoo_answers_topics
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yahoo_answers_topics
type: yahoo_answers_topics
args: yahoo_answers_topics
metrics:
- name: Accuracy
type: accuracy
value: 0.7499166666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-yahoo_answers_topics
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the yahoo_answers_topics dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8092
- Accuracy: 0.7499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 86625
- training_steps: 866250
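The linear schedule with warmup listed above can be sketched in plain Python (an illustration of the schedule's shape, not the Trainer's internal implementation):

```python
def linear_warmup_lr(step, base_lr=5e-05, warmup_steps=86625, total_steps=866250):
    """Learning rate under linear warmup followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up from 0 to base_lr
    # decay linearly from base_lr at the end of warmup to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(0))       # 0.0
print(linear_warmup_lr(86625))   # 5e-05 (peak, at the end of warmup)
print(linear_warmup_lr(866250))  # 0.0 (fully decayed)
```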
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.162 | 0.01 | 2000 | 1.7444 | 0.5681 |
| 1.3126 | 0.02 | 4000 | 1.0081 | 0.7054 |
| 0.9592 | 0.03 | 6000 | 0.9021 | 0.7234 |
| 0.8903 | 0.05 | 8000 | 0.8827 | 0.7276 |
| 0.8685 | 0.06 | 10000 | 0.8540 | 0.7341 |
| 0.8422 | 0.07 | 12000 | 0.8547 | 0.7365 |
| 0.8535 | 0.08 | 14000 | 0.8264 | 0.7372 |
| 0.8178 | 0.09 | 16000 | 0.8331 | 0.7389 |
| 0.8325 | 0.1 | 18000 | 0.8242 | 0.7411 |
| 0.8181 | 0.12 | 20000 | 0.8356 | 0.7437 |
| 0.8171 | 0.13 | 22000 | 0.8090 | 0.7451 |
| 0.8092 | 0.14 | 24000 | 0.8469 | 0.7392 |
| 0.8057 | 0.15 | 26000 | 0.8185 | 0.7478 |
| 0.8085 | 0.16 | 28000 | 0.8090 | 0.7467 |
| 0.8229 | 0.17 | 30000 | 0.8225 | 0.7417 |
| 0.8151 | 0.18 | 32000 | 0.8262 | 0.7419 |
| 0.81 | 0.2 | 34000 | 0.8149 | 0.7383 |
| 0.8073 | 0.21 | 36000 | 0.8225 | 0.7441 |
| 0.816 | 0.22 | 38000 | 0.8037 | 0.744 |
| 0.8217 | 0.23 | 40000 | 0.8409 | 0.743 |
| 0.82 | 0.24 | 42000 | 0.8286 | 0.7385 |
| 0.8101 | 0.25 | 44000 | 0.8282 | 0.7413 |
| 0.8254 | 0.27 | 46000 | 0.8170 | 0.7414 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
| 3,120 |
justin871030/bert-base-uncased-goemotions-group-finetuned | [
"ambiguous",
"negative",
"neutral",
"positive"
] | ---
language: en
tags:
- go-emotion
- text-classification
- pytorch
datasets:
- go_emotions
metrics:
- f1
widget:
- text: "Thanks for giving advice to the people who need it! 👌🙏"
license: mit
---
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Applied label smoothing during training.
4. Used weighted loss and focal loss to improve the classes that trained poorly.
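The label smoothing in step 3 can be sketched generically (a standard formulation; the author's exact training code may differ):

```python
def smooth_labels(one_hot, epsilon=0.1):
    """Soften a one-hot target toward the uniform distribution.

    The true class keeps weight (1 - epsilon) + epsilon/n; every other
    class receives epsilon/n, discouraging over-confident predictions.
    """
    n = len(one_hot)
    return [v * (1 - epsilon) + epsilon / n for v in one_hot]

smoothed = smooth_labels([0, 0, 1, 0])  # ≈ [0.025, 0.025, 0.925, 0.025]
```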
## Results
Best `Macro F1` result: 70%
## Tutorial Link
- [GitHub](https://github.com/justin871030/GoEmotions) | 615 |
michaelrglass/albert-base-rci-tabmcq-col | null | Entry not found | 15 |
mrm8488/bert-tiny-finetuned-yahoo_answers_topics | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
mrm8488/deberta-v3-small-finetuned-cola | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
widget:
- text: "They represented seriously to the dean Mary as a genuine linguist."
metrics:
- matthews_correlation
model-index:
- name: deberta-v3-small
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6333205721749096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-v3-small fine-tuned on CoLA
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4051
- Matthews Correlation: 0.6333
## Model description
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, along with some innovations to be introduced in our upcoming paper. Compared to DeBERTa V2, our V3 version significantly improves model performance on downstream tasks. You can find a brief introduction to the model in Appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), and we will provide more details in a separate write-up.
The DeBERTa V3 small model comes with 6 layers and a hidden size of 768. Its total parameter count is 143M, since we use a vocabulary containing 128K tokens, which introduces 98M parameters in the embedding layer. This model was trained with the same 160GB data as DeBERTa V2.
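A quick back-of-the-envelope check of the embedding parameter count quoted above (assuming a vocabulary of exactly 128K entries and hidden size 768):

```python
vocab_size = 128_000  # "128K tokens" per the card; the exact figure may differ
hidden_size = 768

embedding_params = vocab_size * hidden_size
print(embedding_params)  # 98304000, i.e. roughly the 98M quoted above
```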
## Intended uses & limitations
More information needed
## Training and evaluation data
The Corpus of Linguistic Acceptability (CoLA) in its full form consists of 10657 sentences from 23 linguistics publications, expertly annotated for acceptability (grammaticality) by their original authors. The public version provided here contains 9594 sentences belonging to training and development sets, and excludes 1063 sentences belonging to a held out test set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 535 | 0.4051 | 0.6333 |
| 0.3371 | 2.0 | 1070 | 0.4455 | 0.6531 |
| 0.3371 | 3.0 | 1605 | 0.5755 | 0.6499 |
| 0.1305 | 4.0 | 2140 | 0.7188 | 0.6553 |
| 0.1305 | 5.0 | 2675 | 0.8047 | 0.6700 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 3,577 |
mschwab/va_bert_classification | [
"VA",
"no VA"
] | Fine-tuned bert-base model for binary Vossian antonomasia detection at the sentence level. | 86 |
rohanrajpal/bert-base-en-hi-codemix-cased | [
"negative",
"neutral",
"positive"
] | ---
language:
- hi
- en
tags:
- es
- en
- codemix
license: "apache-2.0"
datasets:
- SAIL 2017
metrics:
- fscore
- accuracy
- precision
- recall
---
# BERT codemixed base model for Hinglish (cased)
This model was built using [lingualytics](https://github.com/lingualytics/py-lingualytics), an open-source library that supports code-mixed analytics.
## Model description
Input for the model: Any codemixed Hinglish text
Output for the model: Sentiment. (0 - Negative, 1 - Neutral, 2 - Positive)
I took the bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the [SAIL 2017](http://www.dasdipankar.com/SAILCodeMixed.html) dataset.
## Eval results
Performance of this model on the dataset
| metric | score |
|------------|----------|
| acc | 0.55873 |
| f1 | 0.558369 |
| acc_and_f1 | 0.558549 |
| precision | 0.558075 |
| recall | 0.55873 |
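As a sanity check on the table above, `acc_and_f1` appears to be the simple mean of the accuracy and F1 rows:

```python
acc = 0.55873
f1 = 0.558369

acc_and_f1 = (acc + f1) / 2  # ≈ 0.55855, consistent with the 0.558549 in the table
```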
#### How to use
Here is how to use this model to get the features of a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
model = AutoModelForSequenceClassification.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in *TensorFlow*:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
model = TFBertModel.from_pretrained('rohanrajpal/bert-base-en-hi-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
#### Preprocessing
Followed standard preprocessing techniques:
- removed digits
- removed punctuation
- removed stopwords
- removed excess whitespace
Here's the snippet
```python
from pathlib import Path
import pandas as pd
from lingualytics.preprocessing import remove_lessthan, remove_punctuation, remove_stopwords
from lingualytics.stopwords import hi_stopwords,en_stopwords
from texthero.preprocessing import remove_digits, remove_whitespace
root = Path('<path-to-data>')
for file in ('test', 'train', 'validation'):
tochange = root / f'{file}.txt'
df = pd.read_csv(tochange,header=None,sep='\t',names=['text','label'])
df['text'] = df['text'].pipe(remove_digits) \
.pipe(remove_punctuation) \
.pipe(remove_stopwords,stopwords=en_stopwords.union(hi_stopwords)) \
.pipe(remove_whitespace)
df.to_csv(tochange,index=None,header=None,sep='\t')
```
## Training data
The dataset and annotations are not good, but this is the best dataset I could find. I am working on procuring my own dataset and will try to come up with a better model!
## Training procedure
I fine-tuned the [bert-base-multilingual-cased model](https://huggingface.co/bert-base-multilingual-cased) on the dataset.
| 3,137 |
rti-international/rota | [
"AGGRAVATED ASSAULT",
"ARMED ROBBERY",
"ARSON",
"ASSAULTING PUBLIC OFFICER",
"AUTO THEFT",
"BLACKMAIL/EXTORTION/INTIMIDATION",
"BRIBERY AND CONFLICT OF INTEREST",
"BURGLARY",
"CHILD ABUSE",
"COCAINE OR CRACK VIOLATION OFFENSE UNSPECIFIED",
"COMMERCIALIZED VICE",
"CONTEMPT OF COURT",
"CONTRIB... | ---
language:
- en
widget:
- text: theft 3
- text: forgery
- text: unlawful possession short-barreled shotgun
- text: criminal trespass 2nd degree
- text: eluding a police vehicle
- text: upcs synthetic narcotic
---
# ROTA
## Rapid Offense Text Autocoder
[](https://huggingface.co/rti-international/rota)
[](https://github.com/RTIInternational/rota)
[](https://doi.org/10.5281/zenodo.4770492)
Criminal justice research often requires conversion of free-text offense descriptions into overall charge categories to aid analysis. For example, the free-text offense of "eluding a police vehicle" would be coded to a charge category of "Obstruction - Law Enforcement". Since free-text offense descriptions aren't standardized and often need to be categorized in large volumes, this can result in a manual and time intensive process for researchers. ROTA is a machine learning model for converting offense text into offense codes.
Currently ROTA predicts the *Charge Category* of a given offense text. A *charge category* is one of the headings for offense codes in the [2009 NCRP Codebook: Appendix F](https://www.icpsr.umich.edu/web/NACJD/studies/30799/datadocumentation#).
The model was trained on [publicly available data](https://web.archive.org/web/20201021001250/https://www.icpsr.umich.edu/web/pages/NACJD/guides/ncrp.html) from a crosswalk containing offenses from all 50 states combined with three additional hand-labeled offense text datasets.
<details>
<summary>Charge Category Example</summary>
<img src="https://i.ibb.co/xLsrzmV/charge-category-example.png" width="500">
</details>
### Data Preprocessing
The input text is standardized through a series of preprocessing steps. The text is first passed through a sequence of 500+ case-insensitive regular expressions that identify common misspellings and abbreviations and expand the text into fuller, corrected English. Some data-specific prefixes and suffixes are then removed from the text -- e.g. some states included a statute as part of the text. Finally, punctuation (excluding dollar signs) is removed from the input, multiple spaces between words are collapsed, and the text is lowercased.
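A minimal sketch of this normalization pipeline (the expansion rules below are hypothetical stand-ins; the real model uses 500+ curated regular expressions):

```python
import re

# Hypothetical abbreviation-expansion rules, standing in for the real 500+
EXPANSIONS = {
    r"\bposs\b": "possession",
    r"\bveh\b": "vehicle",
    r"\bagg\b": "aggravated",
}

def normalize(text: str) -> str:
    text = text.lower()
    for pattern, expansion in EXPANSIONS.items():
        text = re.sub(pattern, expansion, text)
    text = re.sub(r"[^\w\s$]", "", text)      # drop punctuation except dollar signs
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

print(normalize("POSS. of stolen veh -- over $200"))
# possession of stolen vehicle over $200
```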
## Cross-Validation Performance
This model was evaluated using 3-fold cross validation. Except where noted, numbers presented below are the mean value across the 3 folds.
The model in this repository is trained on all available data. Because of this, you can typically expect production performance to be (unknowably) better than the numbers presented below.
### Overall Metrics
| Metric | Value |
| -------- | ----- |
| Accuracy | 0.934 |
| MCC | 0.931 |
| Metric | precision | recall | f1-score |
| --------- | --------- | ------ | -------- |
| macro avg | 0.811 | 0.786 | 0.794 |
*Note*: These are the average of the values *per fold*, so *macro avg* is the average of the macro average of all categories per fold.
### Per-Category Metrics
| Category | precision | recall | f1-score | support |
| ------------------------------------------------------ | --------- | ------ | -------- | ------- |
| AGGRAVATED ASSAULT | 0.954 | 0.954 | 0.954 | 4085 |
| ARMED ROBBERY | 0.961 | 0.955 | 0.958 | 1021 |
| ARSON | 0.946 | 0.954 | 0.95 | 344 |
| ASSAULTING PUBLIC OFFICER | 0.914 | 0.905 | 0.909 | 588 |
| AUTO THEFT | 0.962 | 0.962 | 0.962 | 1660 |
| BLACKMAIL/EXTORTION/INTIMIDATION | 0.872 | 0.871 | 0.872 | 627 |
| BRIBERY AND CONFLICT OF INTEREST | 0.784 | 0.796 | 0.79 | 216 |
| BURGLARY | 0.979 | 0.981 | 0.98 | 2214 |
| CHILD ABUSE | 0.805 | 0.78 | 0.792 | 139 |
| COCAINE OR CRACK VIOLATION OFFENSE UNSPECIFIED | 0.827 | 0.815 | 0.821 | 47 |
| COMMERCIALIZED VICE | 0.818 | 0.788 | 0.802 | 666 |
| CONTEMPT OF COURT | 0.982 | 0.987 | 0.984 | 2952 |
| CONTRIBUTING TO DELINQUENCY OF A MINOR | 0.544 | 0.333 | 0.392 | 50 |
| CONTROLLED SUBSTANCE - OFFENSE UNSPECIFIED | 0.864 | 0.791 | 0.826 | 280 |
| COUNTERFEITING (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| DESTRUCTION OF PROPERTY | 0.97 | 0.968 | 0.969 | 2560 |
| DRIVING UNDER INFLUENCE - DRUGS | 0.567 | 0.603 | 0.581 | 34 |
| DRIVING UNDER THE INFLUENCE | 0.951 | 0.946 | 0.949 | 2195 |
| DRIVING WHILE INTOXICATED | 0.986 | 0.981 | 0.984 | 2391 |
| DRUG OFFENSES - VIOLATION/DRUG UNSPECIFIED | 0.903 | 0.911 | 0.907 | 3100 |
| DRUNKENNESS/VAGRANCY/DISORDERLY CONDUCT | 0.856 | 0.861 | 0.858 | 380 |
| EMBEZZLEMENT | 0.865 | 0.759 | 0.809 | 100 |
| EMBEZZLEMENT (FEDERAL ONLY) | 0 | 0 | 0 | 1 |
| ESCAPE FROM CUSTODY | 0.988 | 0.991 | 0.989 | 4035 |
| FAMILY RELATED OFFENSES | 0.739 | 0.773 | 0.755 | 442 |
| FELONY - UNSPECIFIED | 0.692 | 0.735 | 0.712 | 122 |
| FLIGHT TO AVOID PROSECUTION | 0.46 | 0.407 | 0.425 | 38 |
| FORCIBLE SODOMY | 0.82 | 0.8 | 0.809 | 76 |
| FORGERY (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| FORGERY/FRAUD | 0.911 | 0.928 | 0.919 | 4687 |
| FRAUD (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| GRAND LARCENY - THEFT OVER $200 | 0.957 | 0.973 | 0.965 | 2412 |
| HABITUAL OFFENDER | 0.742 | 0.627 | 0.679 | 53 |
| HEROIN VIOLATION - OFFENSE UNSPECIFIED | 0.879 | 0.811 | 0.843 | 24 |
| HIT AND RUN DRIVING | 0.922 | 0.94 | 0.931 | 303 |
| HIT/RUN DRIVING - PROPERTY DAMAGE | 0.929 | 0.918 | 0.923 | 362 |
| IMMIGRATION VIOLATIONS | 0.84 | 0.609 | 0.697 | 19 |
| INVASION OF PRIVACY | 0.927 | 0.923 | 0.925 | 1235 |
| JUVENILE OFFENSES | 0.928 | 0.866 | 0.895 | 144 |
| KIDNAPPING | 0.937 | 0.93 | 0.933 | 553 |
| LARCENY/THEFT - VALUE UNKNOWN | 0.955 | 0.945 | 0.95 | 3175 |
| LEWD ACT WITH CHILDREN | 0.775 | 0.85 | 0.811 | 596 |
| LIQUOR LAW VIOLATIONS | 0.741 | 0.768 | 0.755 | 214 |
| MANSLAUGHTER - NON-VEHICULAR | 0.626 | 0.802 | 0.701 | 139 |
| MANSLAUGHTER - VEHICULAR | 0.79 | 0.853 | 0.819 | 117 |
| MARIJUANA/HASHISH VIOLATION - OFFENSE UNSPECIFIED | 0.741 | 0.662 | 0.699 | 62 |
| MISDEMEANOR UNSPECIFIED | 0.63 | 0.243 | 0.347 | 57 |
| MORALS/DECENCY - OFFENSE | 0.774 | 0.764 | 0.769 | 412 |
| MURDER | 0.965 | 0.915 | 0.939 | 621 |
| OBSTRUCTION - LAW ENFORCEMENT | 0.939 | 0.947 | 0.943 | 4220 |
| OFFENSES AGAINST COURTS, LEGISLATURES, AND COMMISSIONS | 0.881 | 0.895 | 0.888 | 1965 |
| PAROLE VIOLATION | 0.97 | 0.953 | 0.962 | 946 |
| PETTY LARCENY - THEFT UNDER $200 | 0.965 | 0.761 | 0.85 | 139 |
| POSSESSION/USE - COCAINE OR CRACK | 0.893 | 0.928 | 0.908 | 68 |
| POSSESSION/USE - DRUG UNSPECIFIED | 0.624 | 0.535 | 0.572 | 189 |
| POSSESSION/USE - HEROIN | 0.884 | 0.852 | 0.866 | 25 |
| POSSESSION/USE - MARIJUANA/HASHISH | 0.977 | 0.97 | 0.973 | 556 |
| POSSESSION/USE - OTHER CONTROLLED SUBSTANCES | 0.975 | 0.965 | 0.97 | 3271 |
| PROBATION VIOLATION | 0.963 | 0.953 | 0.958 | 1158 |
| PROPERTY OFFENSES - OTHER | 0.901 | 0.87 | 0.885 | 446 |
| PUBLIC ORDER OFFENSES - OTHER | 0.7 | 0.721 | 0.71 | 1871 |
| RACKETEERING/EXTORTION (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| RAPE - FORCE | 0.842 | 0.873 | 0.857 | 641 |
| RAPE - STATUTORY - NO FORCE | 0.707 | 0.55 | 0.611 | 140 |
| REGULATORY OFFENSES (FEDERAL ONLY) | 0.847 | 0.567 | 0.674 | 70 |
| RIOTING | 0.784 | 0.605 | 0.68 | 119 |
| SEXUAL ASSAULT - OTHER | 0.836 | 0.836 | 0.836 | 971 |
| SIMPLE ASSAULT | 0.976 | 0.967 | 0.972 | 4577 |
| STOLEN PROPERTY - RECEIVING | 0.959 | 0.957 | 0.958 | 1193 |
| STOLEN PROPERTY - TRAFFICKING | 0.902 | 0.888 | 0.895 | 491 |
| TAX LAW (FEDERAL ONLY) | 0.373 | 0.233 | 0.286 | 30 |
| TRAFFIC OFFENSES - MINOR | 0.974 | 0.977 | 0.976 | 8699 |
| TRAFFICKING - COCAINE OR CRACK | 0.896 | 0.951 | 0.922 | 185 |
| TRAFFICKING - DRUG UNSPECIFIED | 0.709 | 0.795 | 0.749 | 516 |
| TRAFFICKING - HEROIN | 0.871 | 0.92 | 0.894 | 54 |
| TRAFFICKING - OTHER CONTROLLED SUBSTANCES | 0.963 | 0.954 | 0.959 | 2832 |
| TRAFFICKING MARIJUANA/HASHISH | 0.921 | 0.943 | 0.932 | 255 |
| TRESPASSING | 0.974 | 0.98 | 0.977 | 1916 |
| UNARMED ROBBERY | 0.941 | 0.939 | 0.94 | 377 |
| UNAUTHORIZED USE OF VEHICLE | 0.94 | 0.908 | 0.924 | 304 |
| UNSPECIFIED HOMICIDE | 0.61 | 0.554 | 0.577 | 60 |
| VIOLENT OFFENSES - OTHER | 0.827 | 0.817 | 0.822 | 606 |
| VOLUNTARY/NONNEGLIGENT MANSLAUGHTER | 0.619 | 0.513 | 0.542 | 54 |
| WEAPON OFFENSE | 0.943 | 0.949 | 0.946 | 2466 |
*Note: `support` is the average number of observations predicted on per fold, so the total number of observations per class is roughly 3x `support`.*
### Using Confidence Scores
If we interpret the classification probability as a confidence score, we can use it to filter out predictions that the model isn't as confident about. We applied this process in 3-fold cross validation. The numbers presented below indicate how much of the prediction data is retained given a confidence score cutoff of `p`. We present the overall accuracy and MCC metrics as if the model was only evaluated on this subset of confident predictions.
| | cutoff | percent retained | mcc | acc |
| --- | ------ | ---------------- | ----- | ----- |
| 0 | 0.85 | 0.952 | 0.96 | 0.961 |
| 1 | 0.9 | 0.943 | 0.964 | 0.965 |
| 2 | 0.95 | 0.928 | 0.97 | 0.971 |
| 3 | 0.975 | 0.912 | 0.975 | 0.976 |
| 4 | 0.99 | 0.886 | 0.982 | 0.983 |
| 5 | 0.999 | 0.733 | 0.995 | 0.996 | | 12,942 |
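The cutoff filtering described above can be sketched as follows (toy predictions; not the actual evaluation code):

```python
def filter_by_confidence(preds, cutoff):
    """preds: list of (confidence, is_correct) pairs.

    Returns (fraction of predictions retained, accuracy on the retained subset).
    """
    kept = [(c, ok) for c, ok in preds if c >= cutoff]
    if not kept:
        return 0.0, None
    retained = len(kept) / len(preds)
    accuracy = sum(ok for _, ok in kept) / len(kept)
    return retained, accuracy

preds = [(0.99, True), (0.97, True), (0.92, True), (0.80, False), (0.60, True)]
print(filter_by_confidence(preds, cutoff=0.9))  # (0.6, 1.0)
```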
tyqiangz/indobert-lite-large-p2-smsa | [
"positive",
"neutral",
"negative"
] | ---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: true
datasets:
- Indo4B
---
# IndoBERT-Lite Large Model (phase2 - uncased) Finetuned on IndoNLU SmSA dataset
Fine-tuned the IndoBERT-Lite Large (phase 2 - uncased) model on the IndoNLU SmSA dataset, following the procedures stated in the paper [IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding](https://arxiv.org/pdf/2009.05387.pdf).
## How to use
```python
from transformers import pipeline
classifier = pipeline("text-classification",
model='tyqiangz/indobert-lite-large-p2-smsa',
return_all_scores=True)
text = "Penyakit koronavirus 2019"
prediction = classifier(text)
prediction
"""
Output:
[[{'label': 'positive', 'score': 0.0006000096909701824},
{'label': 'neutral', 'score': 0.01223431620746851},
{'label': 'negative', 'score': 0.987165629863739}]]
"""
```
**Finetuning hyperparameters:**
- learning rate: 2e-5
- batch size: 16
- no. of epochs: 5
- max sequence length: 512
- random seed: 42
**Classes:**
- 0: positive
- 1: neutral
- 2: negative
**Performance metrics on SmSA validation dataset**
- Validation accuracy: 0.94
- Validation F1: 0.91
- Validation Recall: 0.91
- Validation Precision: 0.93
| 1,357 |
vishnun/bert-base-cased-tamil-mix-sentiment | [
"Positive",
"Negative",
"Mixed_feelings",
"unknown_state",
"not-Tamil"
] | # Tamil Mix Sentiment analysis
The model is trained on the tamil-mix-sentiment dataset, fine-tuned with bert-base-cased as the backbone.
## Inference usage
In the hosted inference widget, type the text you want to classify.
E.g.: Super a iruku bro intha work, vera level mass | 276 |
vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated | [
"LABEL_0"
] | # cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated
This CrossEncoder was trained with MarginMSE loss from the [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated](https://hf.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated) checkpoint. **Word embedding matrix has been updated during training**.
You can load the model with [sentence-transformers](https://sbert.net):
```python
from sentence_transformers import CrossEncoder
from torch import nn
model_name = "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated"
model = CrossEncoder(model_name, default_activation_function=nn.Identity())
```
Performance on TREC Deep Learning (nDCG@10):
- TREC-DL 19: 71.65
- TREC-DL 20: 73.6
| 691 |
Kiran146/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9227765339978083
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Accuracy: 0.9225
- F1: 0.9228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.84 | 1.0 | 250 | 0.3133 | 0.909 | 0.9070 |
| 0.2459 | 2.0 | 500 | 0.2224 | 0.9225 | 0.9228 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,800 |
jkhan447/sentiment-model-sample | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: sentiment-model-sample
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5280
- Accuracy: 0.9395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
| 1,386 |
anjandash/finetuned-bert-java-cmpx | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: mit
---
| 24 |
ndubuisi/pfam_init | [
"PF00001.21",
"PF00002.24",
"PF00003.22",
"PF00004.29",
"PF00005.27",
"PF00006.25",
"PF00007.22",
"PF00008.27",
"PF00009.27",
"PF00010.26",
"PF00011.21",
"PF00012.20",
"PF00013.29",
"PF00014.23",
"PF00015.21",
"PF00016.20",
"PF00017.24",
"PF00018.28",
"PF00019.20",
"PF00020.18"... | Entry not found | 15 |
DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323 | [
"AGAINST",
"FAVOR",
"NEUTRAL"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- DrishtiSharma/autonlp-data-Text-Classification-Catalonia-Independence-AutoNLP
co2_eq_emissions: 3.622203603306694
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 633018323
- CO2 Emissions (in grams): 3.622203603306694
## Validation Metrics
- Loss: 0.681106686592102
- Accuracy: 0.709136109384711
- Macro F1: 0.6987186860138147
- Micro F1: 0.709136109384711
- Weighted F1: 0.7059639788836748
- Macro Precision: 0.7174345617951404
- Micro Precision: 0.709136109384711
- Weighted Precision: 0.712710833401347
- Macro Recall: 0.6912117894374218
- Micro Recall: 0.709136109384711
- Weighted Recall: 0.709136109384711
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,549 |
cambridgeltl/sst_distilbert-base-uncased | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
alexhf90/Clasificacion_sentimientos | [
"Comentario_Negativo",
"Comentario_Positivo"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Clasificacion_sentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clasificacion_sentimientos
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3399
- Accuracy: 0.9428
## Model description
Se entrena un modelo que es capaz de clasificar si es un comentario postivo o negativo.
## Intended uses & limitations
More information needed
## Training and evaluation data
Se entrenó el modelo usando comentarios de peliculas de la página $https://www.filmaffinity.com/es/main.html$
- Estos comentarios estan en la base de datos alojada en Kaggle,
url : https://www.kaggle.com/ricardomoya/criticas-peliculas-filmaffinity-en-espaniol/code
## Training procedure
La variable review_rate se usó para clasificar los comentarios positivos y negativos así:
Positivos: los rating con 8,9,10.
Negativos: Los rating con 3,2,1.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2566 | 1.0 | 901 | 0.5299 | 0.8935 |
| 0.0963 | 2.0 | 1802 | 0.2885 | 0.9383 |
| 0.0133 | 3.0 | 2703 | 0.3546 | 0.9406 |
| 0.0002 | 4.0 | 3604 | 0.3399 | 0.9428 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 2,018 |
Suyogyart/nepali-16-newsgroups-classification | [
"Arts",
"Automobile",
"Bank",
"Crime",
"Diaspora",
"Education",
"Entertainment",
"Health",
"Lifestyle",
"Literature",
"Market",
"Politics",
"Sports",
"Technology",
"Tourism",
"World"
] | ---
license: apache-2.0
language: ne
tags:
- multiclass-classification
- newsgroup
- nepali
---
# Nepali 16 News Group Classification
This model is suitable for classifying news categories in Nepali language into 16 different groups. It is fine-tuned on a pretrained DistilBERT model with Sequence Classification head on 16 newsgroup dataset for Nepali language.
## Acknowledgements
### Pretrained DistilBERT model
This model is fine-tuned on a text classification problem using a [pretrained model](https://huggingface.co/Sakonii/distilbert-base-nepali) available at HuggingFace.
## Dataset
This dataset consists of News documents in Nepali language which are equally categorized into 16 different categories. It is primarily designed for the purpose of multiclass text classification tasks. Each category consists of 1000 different news articles scraped from online Nepali news portals namely Ekantipur, Nagarik, Gorkhapatra, Online Khabar and many more. In addition to the article text, it also contains news headings, source from where the news is taken from and brief summary of what news is about. However, summaries are only available for news from certain sources.
The dataset is Label Encoded, i.e. it consists of 'labels' column that denote the numerical representation of news categories.
## Model Fine-tuning
The model fine-tuning was carried out in Google Colab with Tesla T4 GPU environment using HuggingFace's Trainer API. The training took approx. 42 minutes and was trained for 4 epochs until it reached the validation accuracy threshold.
**Dataset Splits**
| Split | Size | No. of samples |
|------------|------|----------------|
| train | 0.7 | 11200 |
| validation | 0.15 | 2400 |
| test | 0.15 | 2400 |
**DistilBERT Tokenizer parameters**
```
padding = True
truncation = True
max_len = 512
```
**Model Trainer arguments (For Trainer API)**
```
epochs = 5
batch_size = 16
learning_rate = 5e-05
save_steps = 500
eval_steps = 500
```
## Training Results
| Step | Training Loss | Validation Loss | Accuracy | Balanced Accuracy | Precision | Recall | F1 |
|------|---------------|-----------------|----------|-------------------|-----------|----------|----------|
| 500 | 0.718600 | 0.407946 | 0.878750 | 0.878750 | 0.882715 | 0.878750 | 0.877678 |
| 1000 | 0.252300 | 0.372410 | 0.897083 | 0.897083 | 0.903329 | 0.897083 | 0.897369 |
| 1500 | 0.175000 | 0.323519 | 0.916250 | 0.916250 | 0.917955 | 0.916250 | 0.916297 |
| 2000 | 0.099400 | 0.339903 | 0.916667 | 0.916667 | 0.919054 | 0.916667 | 0.916141 |
| 2500 | 0.058900 | 0.354112 | 0.921250 | 0.921250 | 0.922036 | 0.921250 | 0.920899 |
| 3000 | 0.023300 | 0.360163 | 0.922500 | 0.922500 | 0.922767 | 0.922500 | 0.922219 |
**Validation Loss:** 0.3235
**Validation Accuracy:** 92.625%
## Testing Results
| category | precision | recall | f1-score | support |
|---------------|-----------|--------|----------|---------|
| Arts | 0.94 | 0.97 | 0.95 | 150 |
| Diaspora | 0.97 | 0.93 | 0.95 | 150 |
| Bank | 0.97 | 0.86 | 0.91 | 150 |
| Technology | 0.98 | 0.99 | 0.99 | 150 |
| Literature | 0.92 | 0.88 | 0.90 | 150 |
| Automobile | 0.93 | 0.97 | 0.95 | 150 |
| World | 0.90 | 0.93 | 0.92 | 150 |
| Market | 0.93 | 0.98 | 0.95 | 150 |
| Lifestyle | 0.99 | 0.96 | 0.97 | 150 |
| Sports | 0.90 | 0.86 | 0.88 | 150 |
| Health | 0.86 | 0.89 | 0.87 | 150 |
| Entertainment | 0.98 | 0.97 | 0.97 | 150 |
| Politics | 0.97 | 0.99 | 0.98 | 150 |
| Tourism | 0.82 | 0.96 | 0.88 | 150 |
| Crime | 0.97 | 0.96 | 0.97 | 150 |
| Education | 0.96 | 0.84 | 0.90 | 150 |
| | | | | |
| accuracy | | | 0.93 | 2400 |
| macro avg | 0.94 | 0.93 | 0.93 | 2400 |
| weighted avg | 0.94 | 0.93 | 0.93 | 2400 |
## Sample Predictions
### Sample Text (Sports)
```
काठमाडौँ — त्रिभुवन आर्मी क्लबले ६ स्वर्ण, २ रजत र ६ कांस्य पदक जित्दै प्रथम वीर गणेशमान सिंह राष्ट्रिय फेन्सिङ प्रतियोगितामा टिम च्याम्पियन ट्रफी जितेको छ ।
दोस्रो भएको एपीएफले ३ स्वर्ण, ५ रजत र ८ कांस्य जित्यो । वाग्मती प्रदेशले ३ स्वर्ण, ५ रजत र ३ कांस्य जित्दै तेस्रो स्थान हात पार्यो ।
वीर गणेशमान सिंह स्पोर्ट्स कमिटी र नेपाल फेन्सिङ संघको संयुक्त आयोजनामा भएको प्रतियोगिताको महिला फोइलतर्फ एपीएफकी मन्दिरा थापाले स्वर्ण जितिन् । उनले फाइनलमा चिरप्रतिद्वन्द्वी सेनाकी रमा सिंहलाई १५–१२ ले हराइन् । आर्मीकी मनीषा राई र वाग्मतीकी अञ्जु तामाङ तेस्रो भए ।
पुरुषको टिम फोइलतर्फ आर्मीले स्वर्ण जित्यो । आर्मीले वाग्मतीलाई ४५–२९ स्कोरले हरायो । गण्डकी र एपीएफले कांस्य जिते ।
टिम महिला सावरमा आर्मीले स्वर्ण जित्यो । फाइनलमा आर्मीले एपीएफलाई ४५–३६ स्कोरले हराएर स्वर्ण जितेको हो । वाग्मती र गण्डकीले कांस्य जिते ।
महिला टिम फोइलतर्फ एपीएफले वाग्मती प्रदेशलाई ४५–३६ अंकले हरायो । आर्मी र प्रदेश १ तेस्रो भए ।
पुरुष इपी टिमतर्फ आर्मीले एपीएफलाई ४५–४० अंकले पराजित गर्दै स्वर्ण हात जित्यो ।
```
**Predicted Outputs**
```
***** Running Prediction *****
Num examples = 1
Batch size = 8
Predicted Category: Sports
```
### Sample Text (RU-UKR issue)
```
रूसी आक्रमणका कारण शरणार्थी जीवन बिताउन बाध्य युक्रेनीलाई
छिमेकी देशहरुले खाने, बस्नेलगायतका आधारभूत आवश्यकता उपलब्ध गराइरहेका छन्
जेनेभा — युक्रेनमा रुसले आक्रमण सुरु गरेयता २० लाख सर्वसाधारणले देश छाडेका छन् । शरणार्थीसम्बन्धी संयुक्त राष्ट्रसंघीय निकायका अुनसार विस्थापितहरू पोल्यान्ड, हंगेरी, स्लोभाकिया, मोल्दोभा, रोमानिया पुगेका छन् ।
कम्तीमा १२ लाख ४० हजार जना छिमेकी देश पोल्यान्ड पुगेको जनाइएको छ ।
त्यसैगरी, १ लाख ९१ हजार जना हंगेरी पुगेका छन् । १ लाख ४१ हजार स्लोभाकिया, ८३ हजार मोल्दोभा र ८२ हजार रोमानिया पुगेका छन् ।
त्यस्तै, रुस जानेको संख्या ९९ हजार ३ सय पुगेको छ ।
```
**Predicted Outputs**
```
***** Running Prediction *****
Num examples = 1
Batch size = 8
Predicted Category: World
``` | 6,384 |
Daniel-Saeedi/YouAreFakeNews | null | ---
license: mit
---
| 24 |
morenolq/spotify-podcast-advertising-classification | null | ---
language: "en"
datasets:
- Spotify Podcasts Dataset
tags:
- bert
- classification
- pytorch
pipeline:
- text-classification
widget:
- text: "__START__ [SEP] This is the first podcast on natural language processing applied to spoken language."
- text: "This is the first podcast on natural language processing applied to spoken language. [SEP] You can find us on https://twitter.com/PodcastExampleClassifier."
- text: "You can find us on https://twitter.com/PodcastExampleClassifier. [SEP] You can also subscribe to our newsletter https://newsletter.com/PodcastExampleClassifier."
---
**General Information**
This is a `bert-base-cased`, binary classification model, fine-tuned to classify a given sentence as containing advertising content or not. It leverages previous-sentence context to make more accurate predictions.
The model is used in the paper 'Leveraging multimodal content for podcast summarization' published at ACM SAC 2022.
**Usage:**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained('morenolq/spotify-podcast-advertising-classification')
tokenizer = AutoTokenizer.from_pretrained('morenolq/spotify-podcast-advertising-classification')
desc_sentences = ["Sentence 1", "Sentence 2", "Sentence 3"]
for i, s in enumerate(desc_sentences):
if i==0:
context = "__START__"
else:
context = desc_sentences[i-1]
out = tokenizer(context, text, padding = "max_length",
max_length = 256,
truncation=True,
return_attention_mask=True,
return_tensors = 'pt')
outputs = model(**out)
print (f"{s},{outputs}")
```
The manually annotated data, used for model fine-tuning are available [here](https://github.com/MorenoLaQuatra/MATeR/blob/main/description_sentences_classification.tsv)
Hereafter is the classification report of the model evaluation on the test split:
```
precision recall f1-score support
0 0.95 0.93 0.94 256
1 0.88 0.91 0.89 140
accuracy 0.92 396
macro avg 0.91 0.92 0.92 396
weighted avg 0.92 0.92 0.92 396
```
If you find it useful, please cite the following paper:
```bibtex
@inproceedings{10.1145/3477314.3507106,
author = {Vaiani, Lorenzo and La Quatra, Moreno and Cagliero, Luca and Garza, Paolo},
title = {Leveraging Multimodal Content for Podcast Summarization},
year = {2022},
isbn = {9781450387132},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477314.3507106},
doi = {10.1145/3477314.3507106},
booktitle = {Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing},
pages = {863–870},
numpages = {8},
keywords = {multimodal learning, multimodal features fusion, extractive summarization, deep learning, podcast summarization},
location = {Virtual Event},
series = {SAC '22}
}
``` | 3,149 |
Cheltone/BERT_Base_Finetuned_C19Vax | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- accuracy
- f1
model-index:
- name: Bert_Test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert_Test
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1965
- Precision: 0.9332
- Accuracy: 0.9223
- F1: 0.9223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|:------:|
| 0.6717 | 0.4 | 500 | 0.6049 | 0.7711 | 0.6743 | 0.6112 |
| 0.5704 | 0.8 | 1000 | 0.5299 | 0.7664 | 0.7187 | 0.6964 |
| 0.52 | 1.2 | 1500 | 0.4866 | 0.7698 | 0.7537 | 0.7503 |
| 0.4792 | 1.6 | 2000 | 0.4292 | 0.8031 | 0.793 | 0.7927 |
| 0.4332 | 2.0 | 2500 | 0.3920 | 0.8318 | 0.8203 | 0.8198 |
| 0.381 | 2.4 | 3000 | 0.3723 | 0.9023 | 0.8267 | 0.8113 |
| 0.3625 | 2.8 | 3500 | 0.3134 | 0.8736 | 0.8607 | 0.8601 |
| 0.3325 | 3.2 | 4000 | 0.2924 | 0.8973 | 0.871 | 0.8683 |
| 0.3069 | 3.6 | 4500 | 0.2671 | 0.8916 | 0.8847 | 0.8851 |
| 0.2866 | 4.0 | 5000 | 0.2571 | 0.8920 | 0.8913 | 0.8926 |
| 0.2595 | 4.4 | 5500 | 0.2450 | 0.8980 | 0.9 | 0.9015 |
| 0.2567 | 4.8 | 6000 | 0.2246 | 0.9057 | 0.9043 | 0.9054 |
| 0.2255 | 5.2 | 6500 | 0.2263 | 0.9332 | 0.905 | 0.9030 |
| 0.2237 | 5.6 | 7000 | 0.2083 | 0.9265 | 0.9157 | 0.9156 |
| 0.2248 | 6.0 | 7500 | 0.2039 | 0.9387 | 0.9193 | 0.9185 |
| 0.2086 | 6.4 | 8000 | 0.2038 | 0.9436 | 0.9193 | 0.9181 |
| 0.2029 | 6.8 | 8500 | 0.1965 | 0.9332 | 0.9223 | 0.9223 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 2,787 |
Eugen/distilbert-base-uncased-finetuned-stsb | [
"LABEL_0"
] | Entry not found | 15 |
gzomer/claim-spotter | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: claim-spotter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# claim-spotter
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3266
- F1: 0.8709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3697 | 1.0 | 830 | 0.2728 | 0.8589 |
| 0.1475 | 2.0 | 1660 | 0.3266 | 0.8709 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,372 |
Manishkalra/finetuning-sentiment-model-4000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-4000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- name: F1
type: f1
value: 0.9038461538461539
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-4000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2706
- Accuracy: 0.9
- F1: 0.9038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,503 |
ChrisZeng/electra-large-discriminator-nli-efl-tweeteval | [
"contradiction",
"entailment",
"neutral"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electra-large-discriminator-nli-efl-tweeteval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-large-discriminator-nli-efl-tweeteval
This model is a fine-tuned version of [ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli](https://huggingface.co/ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.7943
- F1: 0.7872
- Loss: 0.3004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|
| 0.4384 | 1.0 | 163 | 0.7444 | 0.7308 | 0.3962 |
| 0.3447 | 2.0 | 326 | 0.7659 | 0.7552 | 0.3410 |
| 0.3057 | 3.0 | 489 | 0.7750 | 0.7688 | 0.3234 |
| 0.287 | 4.0 | 652 | 0.7857 | 0.7779 | 0.3069 |
| 0.2742 | 5.0 | 815 | 0.7887 | 0.7822 | 0.3030 |
| 0.2676 | 6.0 | 978 | 0.7939 | 0.7851 | 0.2982 |
| 0.2585 | 7.0 | 1141 | 0.7909 | 0.7822 | 0.3002 |
| 0.2526 | 8.0 | 1304 | 0.7943 | 0.7876 | 0.3052 |
| 0.2479 | 9.0 | 1467 | 0.7939 | 0.7847 | 0.2997 |
| 0.2451 | 10.0 | 1630 | 0.7956 | 0.7873 | 0.3014 |
| 0.2397 | 11.0 | 1793 | 0.7943 | 0.7872 | 0.3004 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.0.dev20220417
- Datasets 2.1.0
- Tokenizers 0.10.3
| 2,284 |
Intel/electra-small-discriminator-mrpc-int8-static | [
"0",
"1"
] | ---
language:
- en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- glue
metrics:
- f1
model-index:
- name: electra-small-discriminator-mrpc-int8-static
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.900709219858156
---
# INT8 electra-small-discriminator-mrpc
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [electra-small-discriminator-mrpc](https://huggingface.co/Intel/electra-small-discriminator-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so
the real sampling size is 304.
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9007|0.8983|
| **Model size (MB)** |14|51.8|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/electra-small-discriminator-mrpc-int8-static',
)
```
| 1,326 |
dmjimenezbravo/electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish
This model is a fine-tuned version of [mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis](https://huggingface.co/mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3534
- Accuracy: 0.7585
- F1: 0.7585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.8145 | 1.0 | 1222 | 0.7033 | 0.7168 | 0.7168 |
| 0.7016 | 2.0 | 2444 | 0.5936 | 0.7731 | 0.7731 |
| 0.6183 | 3.0 | 3666 | 0.5190 | 0.8046 | 0.8046 |
| 0.5516 | 4.0 | 4888 | 0.4678 | 0.8301 | 0.8301 |
| 0.4885 | 5.0 | 6110 | 0.3670 | 0.8713 | 0.8713 |
| 0.4353 | 6.0 | 7332 | 0.3119 | 0.8987 | 0.8987 |
| 0.3957 | 7.0 | 8554 | 0.2908 | 0.9084 | 0.9084 |
| 0.3386 | 8.0 | 9776 | 0.2108 | 0.9348 | 0.9348 |
| 0.2976 | 9.0 | 10998 | 0.1912 | 0.9422 | 0.9422 |
| 0.2828 | 10.0 | 12220 | 0.1496 | 0.9591 | 0.9591 |
| 0.243 | 11.0 | 13442 | 0.1326 | 0.9639 | 0.9639 |
| 0.2049 | 12.0 | 14664 | 0.1249 | 0.9693 | 0.9693 |
| 0.2041 | 13.0 | 15886 | 0.1049 | 0.9752 | 0.9752 |
| 0.1855 | 14.0 | 17108 | 0.0816 | 0.9798 | 0.9798 |
| 0.1637 | 15.0 | 18330 | 0.0733 | 0.9836 | 0.9836 |
| 0.1531 | 16.0 | 19552 | 0.0577 | 0.9880 | 0.9880 |
| 0.1221 | 17.0 | 20774 | 0.0581 | 0.9895 | 0.9895 |
| 0.1207 | 18.0 | 21996 | 0.0463 | 0.9903 | 0.9903 |
| 0.1152 | 19.0 | 23218 | 0.0472 | 0.9908 | 0.9908 |
| 0.1028 | 20.0 | 24440 | 0.0356 | 0.9936 | 0.9936 |
| 0.1027 | 21.0 | 25662 | 0.0278 | 0.9957 | 0.9957 |
| 0.0915 | 22.0 | 26884 | 0.0344 | 0.9946 | 0.9946 |
| 0.0887 | 23.0 | 28106 | 0.0243 | 0.9954 | 0.9954 |
| 0.0713 | 24.0 | 29328 | 0.0208 | 0.9969 | 0.9969 |
| 0.0749 | 25.0 | 30550 | 0.0198 | 0.9964 | 0.9964 |
| 0.0699 | 26.0 | 31772 | 0.0153 | 0.9969 | 0.9969 |
| 0.0567 | 27.0 | 32994 | 0.0144 | 0.9972 | 0.9972 |
| 0.0613 | 28.0 | 34216 | 0.0105 | 0.9982 | 0.9982 |
| 0.0567 | 29.0 | 35438 | 0.0117 | 0.9982 | 0.9982 |
| 0.0483 | 30.0 | 36660 | 0.0072 | 0.9985 | 0.9985 |
| 0.0469 | 31.0 | 37882 | 0.0063 | 0.9987 | 0.9987 |
| 0.0485 | 32.0 | 39104 | 0.0067 | 0.9985 | 0.9985 |
| 0.0464 | 33.0 | 40326 | 0.0020 | 0.9995 | 0.9995 |
| 0.0472 | 34.0 | 41548 | 0.0036 | 0.9995 | 0.9995 |
| 0.0388 | 35.0 | 42770 | 0.0016 | 0.9995 | 0.9995 |
| 0.0248 | 36.0 | 43992 | 0.0047 | 0.9990 | 0.9990 |
| 0.0396 | 37.0 | 45214 | 0.0004 | 0.9997 | 0.9997 |
| 0.0331 | 38.0 | 46436 | 0.0020 | 0.9995 | 0.9995 |
| 0.0292 | 39.0 | 47658 | 0.0000 | 1.0 | 1.0 |
| 0.0253 | 40.0 | 48880 | 0.0001 | 1.0 | 1.0 |
| 0.0285 | 41.0 | 50102 | 0.0000 | 1.0 | 1.0 |
| 0.0319 | 42.0 | 51324 | 0.0000 | 1.0 | 1.0 |
| 0.0244 | 43.0 | 52546 | 0.0000 | 1.0 | 1.0 |
| 0.0261 | 44.0 | 53768 | 0.0001 | 1.0 | 1.0 |
| 0.0256 | 45.0 | 54990 | 0.0000 | 1.0 | 1.0 |
| 0.0258 | 46.0 | 56212 | 0.0000 | 1.0 | 1.0 |
| 0.0173 | 47.0 | 57434 | 0.0000 | 1.0 | 1.0 |
| 0.0253 | 48.0 | 58656 | 0.0000 | 1.0 | 1.0 |
| 0.0241 | 49.0 | 59878 | 0.0000 | 1.0 | 1.0 |
| 0.019 | 50.0 | 61100 | 0.0000 | 1.0 | 1.0 |
| 0.0184 | 51.0 | 62322 | 0.0000 | 1.0 | 1.0 |
| 0.0139 | 52.0 | 63544 | 0.0000 | 1.0 | 1.0 |
| 0.0159 | 53.0 | 64766 | 0.0000 | 1.0 | 1.0 |
| 0.0119 | 54.0 | 65988 | 0.0000 | 1.0 | 1.0 |
| 0.0253 | 55.0 | 67210 | 0.0000 | 1.0 | 1.0 |
| 0.0166 | 56.0 | 68432 | 0.0000 | 1.0 | 1.0 |
| 0.0125 | 57.0 | 69654 | 0.0000 | 1.0 | 1.0 |
| 0.0155 | 58.0 | 70876 | 0.0000 | 1.0 | 1.0 |
| 0.0106 | 59.0 | 72098 | 0.0000 | 1.0 | 1.0 |
| 0.0083 | 60.0 | 73320 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 5,853 |
Ninh/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241543444176422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.924
- F1: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8028 | 1.0 | 250 | 0.3015 | 0.91 | 0.9089 |
| 0.2382 | 2.0 | 500 | 0.2144 | 0.924 | 0.9242 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,805 |
beltran/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8566666666666667
- name: F1
type: f1
value: 0.8571428571428571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3185
- Accuracy: 0.8567
- F1: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,521 |
Tititun/consumer_category | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | Entry not found | 15 |
Tititun/consumer_super | [
"UNKNOWN",
"UNKNOWN_2",
"UNKNOWN_3",
"babies",
"bakery",
"beef",
"beer",
"biscuits",
"body_care",
"butter",
"cats",
"cereal",
"cheese",
"chips",
"chocolate",
"cleaning",
"coffee",
"deodorant",
"diapers",
"dogs",
"eggs",
"electronics",
"energy_drinks",
"feminine_care",
... | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: consumer_super
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# consumer_super
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,019 |
CEBaB/roberta-base.CEBaB.absa.exclusive.seed_77 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.absa.exclusive.seed_88 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_42 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_66 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_77 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_88 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_99 | [
"0",
"1",
"2"
] | Entry not found | 15 |
connectivity/feather_berts_22 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/bert_ft_qqp-6 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-17 | null | Entry not found | 15 |
Jrico1981/sentiment-classification | null | welcome to my sentiment classification model
model trained with the bert-base-uncased base to classify the sentiment of customers who respond to the satisfaction survey. The sentiments that it classifies are positive (1) and negative (0). | 239 |
arrandi/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9341704717427723
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1652
- Accuracy: 0.934
- F1: 0.9342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2606 | 1.0 | 250 | 0.1780 | 0.9285 | 0.9284 |
| 0.1486 | 2.0 | 500 | 0.1652 | 0.934 | 0.9342 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,805 |
caldana/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.927055679622598
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.927
- F1: 0.9271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8251 | 1.0 | 250 | 0.3264 | 0.9015 | 0.8981 |
| 0.2534 | 2.0 | 500 | 0.2236 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,798 |
Clody0071/camembert-base-finetuned-paraphrase | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- pawsx
metrics:
- accuracy
- f1
model-index:
- name: camembert-base-finetuned-paraphrase
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: pawsx
type: pawsx
args: fr
metrics:
- name: Accuracy
type: accuracy
value: 0.9085
- name: F1
type: f1
value: 0.9088724090678741
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-paraphrase
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the pawsx dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
- Accuracy: 0.9085
- F1: 0.9089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3918 | 1.0 | 772 | 0.3211 | 0.869 | 0.8696 |
| 0.2103 | 2.0 | 1544 | 0.2448 | 0.9075 | 0.9077 |
| 0.1622 | 3.0 | 2316 | 0.2577 | 0.9055 | 0.9059 |
| 0.1344 | 4.0 | 3088 | 0.2708 | 0.9085 | 0.9089 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,898 |
bsenker/autotrain-sentanaly-1016134101 | [
"negative",
"objective",
"positive"
] | ---
tags: autotrain
language: tr
widget:
- text: "I love AutoTrain 🤗"
datasets:
- bsenker/autotrain-data-sentanaly
co2_eq_emissions: 2.4274113973426568
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1016134101
- CO2 Emissions (in grams): 2.4274113973426568
## Validation Metrics
- Loss: 0.8357052803039551
- Accuracy: 0.6425438596491229
- Macro F1: 0.6449751139113629
- Micro F1: 0.6425438596491229
- Weighted F1: 0.644975113911363
- Macro Precision: 0.6642782595845687
- Micro Precision: 0.6425438596491229
- Weighted Precision: 0.6642782595845685
- Macro Recall: 0.6425438596491229
- Micro Recall: 0.6425438596491229
- Weighted Recall: 0.6425438596491229
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bsenker/autotrain-sentanaly-1016134101
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bsenker/autotrain-sentanaly-1016134101", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bsenker/autotrain-sentanaly-1016134101", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,391 |
davidcechak/DNADeberta_finedemo_human_or_worm | null | Entry not found | 15 |
alk/roberta-large-mnli-finetuned-header-classifier | [
"CONTRADICTION",
"ENTAILMENT",
"NEUTRAL"
] | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-large-mnli-finetuned-header-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-mnli-finetuned-header-classifier
This model is a fine-tuned version of [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,182 |
andreaschandra/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240890586429673
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8218 | 1.0 | 250 | 0.3165 | 0.9025 | 0.9001 |
| 0.2494 | 2.0 | 500 | 0.2186 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,804 |
annahaz/xlm-roberta-base-finetuned-misogyny-sexism | [
"0",
"1"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-finetuned-misogyny-sexism
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-misogyny-sexism
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0211
- Accuracy: 0.9949
- F1: 0.9948
- Precision: 0.9906
- Recall: 0.9989
- Mae: 0.0051
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3841 | 1.0 | 2778 | 0.4443 | 0.8027 | 0.7919 | 0.8030 | 0.7811 | 0.1973 |
| 0.334 | 2.0 | 5556 | 0.4946 | 0.8058 | 0.8272 | 0.7225 | 0.9674 | 0.1942 |
| 0.2995 | 3.0 | 8334 | 0.2693 | 0.8912 | 0.8951 | 0.8344 | 0.9653 | 0.1088 |
| 0.2675 | 4.0 | 11112 | 0.2575 | 0.9145 | 0.9168 | 0.8612 | 0.98 | 0.0855 |
| 0.2263 | 5.0 | 13890 | 0.1100 | 0.9611 | 0.9598 | 0.9514 | 0.9684 | 0.0389 |
| 0.2089 | 6.0 | 16668 | 0.0999 | 0.9712 | 0.9706 | 0.9524 | 0.9895 | 0.0288 |
| 0.1871 | 7.0 | 19446 | 0.0644 | 0.9782 | 0.9774 | 0.9769 | 0.9779 | 0.0218 |
| 0.1795 | 8.0 | 22224 | 0.0264 | 0.9924 | 0.9922 | 0.9865 | 0.9979 | 0.0076 |
| 0.144 | 9.0 | 25002 | 0.0231 | 0.9924 | 0.9922 | 0.9855 | 0.9989 | 0.0076 |
| 0.1296 | 10.0 | 27780 | 0.0211 | 0.9949 | 0.9948 | 0.9906 | 0.9989 | 0.0051 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| 2,495 |
jogonba2/62244 | [
"COPD signs",
"COVID 19",
"COVID 19 uncertain",
"NSG tube",
"abnormal foreign body",
"adenopathy",
"air bronchogram",
"air fluid level",
"air trapping",
"alveolar pattern",
"aortic atheromatosis",
"aortic button enlargement",
"aortic elongation",
"aortic endoprosthesis",
"apical pleural ... | Entry not found | 15 |
aatmasidha/distilbert-base-uncased-newsmodelclassification | [
"Sadness",
"Joy",
"Love",
"Anger",
"Fear",
"Surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-newsmodelclassification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9278415074713384
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-newsmodelclassification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2177
- Accuracy: 0.928
- F1: 0.9278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8104 | 1.0 | 250 | 0.3057 | 0.9105 | 0.9084 |
| 0.2506 | 2.0 | 500 | 0.2177 | 0.928 | 0.9278 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,817 |
Johny201/autotrain-article_pred-1142742075 | [
"0",
"1"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Johny201/autotrain-data-article_pred
co2_eq_emissions: 3.973071565343572
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1142742075
- CO2 Emissions (in grams): 3.973071565343572
## Validation Metrics
- Loss: 0.6098461151123047
- Accuracy: 0.7227722772277227
- Precision: 0.6805555555555556
- Recall: 0.9074074074074074
- AUC: 0.7480299448384554
- F1: 0.7777777777777779
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Johny201/autotrain-article_pred-1142742075
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Johny201/autotrain-article_pred-1142742075", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Johny201/autotrain-article_pred-1142742075", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,191 |
anahitapld/dbd_bert | null | ---
license: apache-2.0
---
| 28 |
Alstractor/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5343023846000738
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7272
- Matthews Correlation: 0.5343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5219 | 1.0 | 535 | 0.5340 | 0.4215 |
| 0.3467 | 2.0 | 1070 | 0.5131 | 0.5181 |
| 0.2331 | 3.0 | 1605 | 0.6406 | 0.5040 |
| 0.1695 | 4.0 | 2140 | 0.7272 | 0.5343 |
| 0.1212 | 5.0 | 2675 | 0.8399 | 0.5230 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,999 |
CenIA/bert-base-spanish-wwm-cased-finetuned-xnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Doogie/waynehills_sentimental_kor | null | Entry not found | 15 |
M47Labs/it_iptc | [
"arts/culture/entertainment and media",
"conflict/war and peace",
"crime/law and justice",
"disaster/accident and emergency incident",
"economy/business and finance",
"education",
"environment",
"health",
"human interest",
"labour",
"lifestyle and leisure",
"politics",
"religion and belief",... | Entry not found | 15 |
Maelstrom77/roberta-large-qqp | null | Entry not found | 15 |
NDugar/2epochv3mlni | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model, with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and is trained on 160GB of raw data.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it's faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \\
run_glue.py \\
--model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 256 \\
--per_device_train_batch_size ${batch_size} \\
--learning_rate 3e-6 \\
--num_train_epochs 3 \\
--output_dir $output_dir \\
--overwrite_output_dir \\
--logging_steps 10 \\
--logging_dir $output_dir \\
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
```latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` | 4,788 |
NathanZhu/GabHateCorpusTrained | [
"LABEL_0",
"LABEL_1"
] | Test for use in Google Colab :'( | 32 |
Osiris/neutral_non_neutral_classifier | null | ### Introduction:
This model performs text classification. You can check whether a sentence contains any emotion.
### Label Explanation:
LABEL_1: Non-neutral (has some emotion)
LABEL_0: Neutral (has no emotion)
### Usage:
```python
>>> from transformers import pipeline
>>> nnc = pipeline('text-classification', model='Osiris/neutral_non_neutral_classifier')
>>> nnc("Hello, I'm a good model.")
```
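The pipeline returns raw `LABEL_0`/`LABEL_1` ids; a small post-processing helper (using the mapping from the label explanation above, and assuming the standard text-classification pipeline output format) makes the results readable:

```python
# Map the raw pipeline labels to readable names, per the label explanation above.
LABEL_NAMES = {"LABEL_0": "neutral", "LABEL_1": "non-neutral"}

def readable(predictions):
    """Rewrite [{'label': 'LABEL_1', 'score': 0.98}] into [('non-neutral', 0.98)]."""
    return [(LABEL_NAMES[p["label"]], p["score"]) for p in predictions]

print(readable([{"label": "LABEL_1", "score": 0.98}]))  # [('non-neutral', 0.98)]
```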
### Accuracy:
We reach 93.98% accuracy on the validation dataset and 91.92% on the test dataset. | 491 |
SetFit/distilbert-base-uncased__sst2__train-16-4 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1501
- Accuracy: 0.6387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7043 | 1.0 | 7 | 0.7139 | 0.2857 |
| 0.68 | 2.0 | 14 | 0.7398 | 0.2857 |
| 0.641 | 3.0 | 21 | 0.7723 | 0.2857 |
| 0.5424 | 4.0 | 28 | 0.8391 | 0.2857 |
| 0.5988 | 5.0 | 35 | 0.7761 | 0.2857 |
| 0.3698 | 6.0 | 42 | 0.7707 | 0.4286 |
| 0.3204 | 7.0 | 49 | 0.8290 | 0.4286 |
| 0.2882 | 8.0 | 56 | 0.6551 | 0.5714 |
| 0.1512 | 9.0 | 63 | 0.5652 | 0.5714 |
| 0.1302 | 10.0 | 70 | 0.5278 | 0.5714 |
| 0.1043 | 11.0 | 77 | 0.4987 | 0.7143 |
| 0.0272 | 12.0 | 84 | 0.5278 | 0.5714 |
| 0.0201 | 13.0 | 91 | 0.5307 | 0.5714 |
| 0.0129 | 14.0 | 98 | 0.5382 | 0.5714 |
| 0.0117 | 15.0 | 105 | 0.5227 | 0.5714 |
| 0.0094 | 16.0 | 112 | 0.5066 | 0.7143 |
| 0.0104 | 17.0 | 119 | 0.4869 | 0.7143 |
| 0.0069 | 18.0 | 126 | 0.4786 | 0.7143 |
| 0.0062 | 19.0 | 133 | 0.4707 | 0.7143 |
| 0.0065 | 20.0 | 140 | 0.4669 | 0.7143 |
| 0.0051 | 21.0 | 147 | 0.4686 | 0.7143 |
| 0.0049 | 22.0 | 154 | 0.4784 | 0.7143 |
| 0.0046 | 23.0 | 161 | 0.4839 | 0.7143 |
| 0.0039 | 24.0 | 168 | 0.4823 | 0.7143 |
| 0.0044 | 25.0 | 175 | 0.4791 | 0.7143 |
| 0.0037 | 26.0 | 182 | 0.4778 | 0.7143 |
| 0.0038 | 27.0 | 189 | 0.4770 | 0.7143 |
| 0.0036 | 28.0 | 196 | 0.4750 | 0.7143 |
| 0.0031 | 29.0 | 203 | 0.4766 | 0.7143 |
| 0.0031 | 30.0 | 210 | 0.4754 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 3,223 |
TehranNLP-org/roberta-base-qqp-2e-5-42 | null | Entry not found | 15 |
Worldman/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9227046184638882
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Accuracy: 0.9225
- F1: 0.9227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8437 | 1.0 | 250 | 0.3153 | 0.903 | 0.9005 |
| 0.2467 | 2.0 | 500 | 0.2162 | 0.9225 | 0.9227 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cpu
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,805 |
Yanjie/message-intent | [
"goodbye",
"discount",
"can_i_help",
"other",
"escalation",
"goodbye|purchase",
"restock",
"subscription",
"discount|other",
"subscription|removal",
"goodbye|anything_else",
"issue|query_clarification",
"order|query_order_number",
"shipping|policy",
"shopping|query_link_item",
"shoppin... | This is the concierge intent model. Fine-tuned on the DistilBERT uncased model. | 76 |
akilesh96/autonlp-mrcooper_text_classification-529614927 | [
"Animals",
"Compliment",
"Education",
"Health",
"Heavy Emotion",
"Joke",
"Love",
"Politics",
"Religion",
"Science",
"Self"
] | ---
tags: autonlp
language: en
widget:
- text: "Not Many People Know About The City 1200 Feet Below Detroit"
- text: "Bob accepts the challenge, and the next week they're standing in Saint Peters square. 'This isnt gonna work, he's never going to see me here when theres this much people. You stay here, I'll go talk to him and you'll see me on the balcony, the guards know me too.' Half an hour later, Bob and the pope appear side by side on the balcony. Bobs boss gets a heart attack, and Bob goes to visit him in the hospital."
- text: "I’m sorry if you made it this far, but I’m just genuinely idk, I feel like I shouldn’t give up, it’s just getting harder to come back from stuff like this."
datasets:
- akilesh96/autonlp-data-mrcooper_text_classification
co2_eq_emissions: 5.999771405025692
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 529614927
- CO2 Emissions (in grams): 5.999771405025692
## Validation Metrics
- Loss: 0.7582379579544067
- Accuracy: 0.7636103151862464
- Macro F1: 0.770630619486531
- Micro F1: 0.7636103151862464
- Weighted F1: 0.765233270165301
- Macro Precision: 0.7746285216467107
- Micro Precision: 0.7636103151862464
- Weighted Precision: 0.7683270753840836
- Macro Recall: 0.7680576576961138
- Micro Recall: 0.7636103151862464
- Weighted Recall: 0.7636103151862464
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/akilesh96/autonlp-mrcooper_text_classification-529614927
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 2,081 |
alexander-karpov/bert-eatable-classification-en-ru | null | Entry not found | 15 |
anindabitm/sagemaker-distilbert-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9165
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2434
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9423 | 1.0 | 500 | 0.2434 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,668 |
anirudh21/bert-base-uncased-finetuned-qnli | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.791689547867472
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-qnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6268
- Accuracy: 0.7917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 0.5339 | 0.7620 |
| No log | 2.0 | 126 | 0.4728 | 0.7866 |
| No log | 3.0 | 189 | 0.5386 | 0.7847 |
| No log | 4.0 | 252 | 0.6096 | 0.7904 |
| No log | 5.0 | 315 | 0.6268 | 0.7917 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
| 1,843 |
asalics/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9244145121183605
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.924
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7914 | 1.0 | 250 | 0.3032 | 0.905 | 0.9030 |
| 0.2379 | 2.0 | 500 | 0.2207 | 0.924 | 0.9244 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,805 |
ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa | [
"POSITIVE",
"NEUTRAL",
"NEGATIVE"
] | ---
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
model-index:
- name: roberta-base-indonesian-1.5G-sentiment-analysis-smsa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9261904761904762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-indonesian-1.5G-sentiment-analysis-smsa
This model is a fine-tuned version of [cahya/roberta-base-indonesian-1.5G](https://huggingface.co/cahya/roberta-base-indonesian-1.5G) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4294
- Accuracy: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6461 | 1.0 | 688 | 0.2620 | 0.9087 |
| 0.2627 | 2.0 | 1376 | 0.2291 | 0.9151 |
| 0.1784 | 3.0 | 2064 | 0.2891 | 0.9167 |
| 0.1099 | 4.0 | 2752 | 0.3317 | 0.9230 |
| 0.0857 | 5.0 | 3440 | 0.4294 | 0.9262 |
| 0.0346 | 6.0 | 4128 | 0.4759 | 0.9246 |
| 0.0221 | 7.0 | 4816 | 0.4946 | 0.9206 |
| 0.006 | 8.0 | 5504 | 0.5823 | 0.9175 |
| 0.0047 | 9.0 | 6192 | 0.5777 | 0.9159 |
| 0.004 | 10.0 | 6880 | 0.5800 | 0.9175 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 2,255 |
bioformers/bioformer-cased-v1.0-qnli | [
"entailment",
"not_entailment"
[bioformer-cased-v1.0](https://huggingface.co/bioformers/bioformer-cased-v1.0) fine-tuned on the [QNLI](https://huggingface.co/datasets/glue) dataset for 2 epochs.
The fine-tuning process was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=16
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
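A hedged inference sketch for the resulting checkpoint; the `entailment`/`not_entailment` label order is assumed from this repo's label list, so confirm it via `model.config.id2label` before relying on it:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed label order for this checkpoint; confirm via model.config.id2label.
QNLI_LABELS = ["entailment", "not_entailment"]

def qnli_predict(question: str, sentence: str) -> str:
    name = "bioformers/bioformer-cased-v1.0-qnli"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    # QNLI is a sentence-pair task: (question, candidate answer sentence).
    inputs = tokenizer(question, sentence, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return QNLI_LABELS[int(logits.argmax(dim=-1))]
```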
## Evaluation results
eval_accuracy = 0.883397
## More information
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark.
(source: https://paperswithcode.com/dataset/qnli)
Original GLUE paper: https://arxiv.org/abs/1804.07461 | 1,570 |
blizrys/biobert-base-cased-v1.1-finetuned-pubmedqa | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: biobert-base-cased-v1.1-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.1-finetuned-pubmedqa
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3182
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8591 | 0.58 |
| No log | 2.0 | 114 | 0.9120 | 0.58 |
| No log | 3.0 | 171 | 0.8159 | 0.62 |
| No log | 4.0 | 228 | 1.1651 | 0.54 |
| No log | 5.0 | 285 | 1.2350 | 0.6 |
| No log | 6.0 | 342 | 1.5563 | 0.68 |
| No log | 7.0 | 399 | 2.0233 | 0.58 |
| No log | 8.0 | 456 | 2.2054 | 0.5 |
| 0.4463 | 9.0 | 513 | 2.2434 | 0.5 |
| 0.4463 | 10.0 | 570 | 2.3182 | 0.5 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| 2,100 |
chitra/finetuned-adversarial-paraphrase-modell | null | Entry not found | 15 |
emrecan/bert-base-turkish-cased-multinli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
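A minimal zero-shot usage sketch matching the widget examples above (the function name is illustrative):

```python
from transformers import pipeline

def zero_shot_tr(text, candidate_labels):
    clf = pipeline("zero-shot-classification",
                   model="emrecan/bert-base-turkish-cased-multinli_tr")
    result = clf(text, candidate_labels=candidate_labels)
    return result["labels"][0]  # highest-scoring label

# Example: zero_shot_tr("Dolar yükselmeye devam ediyor.", ["ekonomi", "siyaset", "spor"])
```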
| 332 |
federicopascual/finetune-sentiment-analysis-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetune-sentiment-analysis-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8866666666666667
- name: F1
type: f1
value: 0.8944099378881988
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-sentiment-analysis-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4558
- Accuracy: 0.8867
- F1: 0.8944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,536 |
federicopascual/finetuning-sentiment-analysis-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-analysis-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.88125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-analysis-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3130
- Accuracy: 0.8733
- F1: 0.8812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,529 |
gchhablani/fnet-base-finetuned-wnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5492957746478874
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-wnli
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6887
- Accuracy: 0.5493
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name wnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir fnet-base-finetuned-wnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7052 | 1.0 | 40 | 0.6902 | 0.5634 |
| 0.6957 | 2.0 | 80 | 0.7013 | 0.4366 |
| 0.6898 | 3.0 | 120 | 0.6898 | 0.5352 |
| 0.6958 | 4.0 | 160 | 0.6874 | 0.5634 |
| 0.6982 | 5.0 | 200 | 0.6887 | 0.5493 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| 2,751 |
huaen/question_detection | [
"non_question",
"question"
] | Entry not found | 15 |
juliensimon/autonlp-imdb-demo-hf-16622775 | [
"0",
"1"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- juliensimon/autonlp-data-imdb-demo-hf
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16622775
## Validation Metrics
- Loss: 0.18653589487075806
- Accuracy: 0.9408
- Precision: 0.9537643207855974
- Recall: 0.9272076372315036
- AUC: 0.985847396174344
- F1: 0.9402985074626865
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622775
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,083 |
kurianbenoy/distilbert-base-uncased-finetuned-imdb | [
"neg",
"pos"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.923
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3073
- Accuracy: 0.923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2744 | 1.0 | 1563 | 0.2049 | 0.921 |
| 0.1572 | 2.0 | 3126 | 0.2308 | 0.923 |
| 0.0917 | 3.0 | 4689 | 0.3073 | 0.923 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,736 |
laboro-ai/distilbert-base-japanese-finetuned-livedoor | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8"
] | ---
language: ja
tags:
- distilbert
license: cc-by-nc-4.0
---
| 64 |
lucasresck/bert-base-cased-ag-news | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
language:
- en
license: mit
tags:
- bert
- classification
datasets:
- ag_news
metrics:
- accuracy
- f1
- recall
- precision
widget:
- text: "Is it soccer or football?"
example_title: "Sports"
- text: "A new version of Ubuntu was released."
example_title: "Sci/Tech"
---
# bert-base-cased-ag-news
BERT model fine-tuned on AG News classification dataset using a linear layer on top of the [CLS] token output, with 0.945 test accuracy.
### How to use
Here is how to use this model to classify a given text:
```python
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('lucasresck/bert-base-cased-ag-news')
model = BertForSequenceClassification.from_pretrained('lucasresck/bert-base-cased-ag-news')
text = "Is it soccer or football?"
encoded_input = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)
output = model(**encoded_input)
```
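To turn the raw `output` logits into a class name, a small helper can be used. The `World`/`Sports`/`Business`/`Sci/Tech` order below follows the usual `ag_news` label order and should be verified against `model.config.id2label`:

```python
# Assumed ag_news label order; verify against model.config.id2label.
AG_NEWS_LABELS = ["World", "Sports", "Business", "Sci/Tech"]

def to_label(logits) -> str:
    # Pick the class with the highest logit.
    scores = list(logits)
    return AG_NEWS_LABELS[scores.index(max(scores))]

# Example: to_label(output.logits[0].tolist())
```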
### Limitations and bias
Bias was not assessed in this model, but, considering that pre-trained BERT is known to carry bias, it is also expected for this model. BERT's authors say: "This bias will also affect all fine-tuned versions of this model."
## Evaluation results
```
precision recall f1-score support
0 0.9539 0.9584 0.9562 1900
1 0.9884 0.9879 0.9882 1900
2 0.9251 0.9095 0.9172 1900
3 0.9127 0.9242 0.9184 1900
accuracy 0.9450 7600
macro avg 0.9450 0.9450 0.9450 7600
weighted avg 0.9450 0.9450 0.9450 7600
```
| 1,643 |
mrm8488/distilroberta-base-finetuned-suicide-depression | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
widget:
- text: "It's in the back of my mind. I'm not sure I'll be ok. Not sure I can deal with this. I'll try...I will try. Even though it's hard to see the point. But...this still isn't off the table."
model-index:
- name: distilroberta-base-finetuned-suicide-depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-suicide-depression
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6622
- Accuracy: 0.7158
## Model description
Just a **POC** of a Transformer fine-tuned on [SDCNL](https://github.com/ayaanzhaque/SDCNL) dataset for suicide (label 1) or depression (label 0) detection in tweets.
**DO NOT use it in production**
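For experimentation only, per the warning above, a minimal sketch using the label mapping stated in this card (label 1 = suicide, label 0 = depression):

```python
from transformers import pipeline

# Mapping stated in this card: label 1 = suicide, label 0 = depression.
ID2LABEL = {"LABEL_0": "depression", "LABEL_1": "suicide"}

def classify_sdcnl(text: str) -> str:
    clf = pipeline("text-classification",
                   model="mrm8488/distilroberta-base-finetuned-suicide-depression")
    return ID2LABEL[clf(text)[0]["label"]]
```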
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 214 | 0.6204 | 0.6632 |
| No log | 2.0 | 428 | 0.6622 | 0.7158 |
| 0.5244 | 3.0 | 642 | 0.7312 | 0.6684 |
| 0.5244 | 4.0 | 856 | 0.9711 | 0.7105 |
| 0.2876 | 5.0 | 1070 | 1.1620 | 0.7 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.0
- Tokenizers 0.10.3
| 2,015 |
pmthangk09/bert-base-uncased-glue-sst2 | null | Entry not found | 15 |
seongju/kor-3i4k-bert-base-cased | [
"fragment",
"statement",
"question",
"command",
"rhetorical question",
"rhetorical command",
"intonation-depedent utterance"
] | ### Model information
* language : Korean
* fine tuning data : [kor_3i4k](https://huggingface.co/datasets/kor_3i4k)
* License : CC-BY-SA 4.0
* Base model : [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
* input : sentence
* output : intent
----
### Train information
* train_runtime: 2376.638
* train_steps_per_second: 2.175
* train_loss: 0.356829648599977
* epoch: 3.0
----
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(
    "seongju/kor-3i4k-bert-base-cased"
)
model = AutoModelForSequenceClassification.from_pretrained(
    "seongju/kor-3i4k-bert-base-cased"
)
inputs = tokenizer(
"너는 지금 무엇을 하고 있니?",
padding=True, truncation=True, max_length=128, return_tensors="pt"
)
outputs = model(**inputs)
probs = outputs[0].softmax(1)
output = probs.argmax().item()
``` | 909 |
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rureviews | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## MBARTRuSumGazeta-ru-sentiment-RuReviews
MBARTRuSumGazeta-ru-sentiment-RuReviews is a [MBARTRuSumGazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) model fine-tuned on the [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the “Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia.
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
        <td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
    doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
  author={Sergey Smetanin and Mikhail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` | 6,371 |
tals/albert-base-vitaminc_flagging | [
"factual",
"not factual"
] | ---
language: python
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
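A hedged usage sketch for the revision-flagging task; the `(before, after)` sentence-pair input format and the label order below are assumptions, so verify both against the VitaminC repository and `model.config.id2label`:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed label order; confirm via model.config.id2label.
FLAG_LABELS = ["factual", "not factual"]

def flag_revision(sentence_before: str, sentence_after: str) -> str:
    name = "tals/albert-base-vitaminc_flagging"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    # Encode the revision as a sentence pair (assumed input format).
    inputs = tokenizer(sentence_before, sentence_after, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return FLAG_LABELS[int(logits.argmax(dim=-1))]
```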
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
| 2,357 |
textattack/bert-base-cased-STS-B | [
"LABEL_0"
] | ## TextAttack Model Card
This `bert-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 3 epochs with a batch size of 128, a learning
rate of 1e-05, and a maximum sequence length of 128.
Since this was a regression task, the model was trained with a mean squared error loss function.
The best score the model achieved on this task was 0.8244429996636282, as measured by the
eval set Pearson correlation, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
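A minimal scoring sketch, assuming the single regression output is on STS-B's 0–5 similarity scale:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def sts_similarity(sentence_a: str, sentence_b: str) -> float:
    name = "textattack/bert-base-cased-STS-B"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    inputs = tokenizer(sentence_a, sentence_b, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        # Single regression output, roughly on STS-B's 0-5 similarity scale.
        return float(model(**inputs).logits[0][0])
```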
| 673 |
textattack/roberta-base-WNLI | null | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5633802816901409, as measured by the
eval set accuracy, found at epoch 0.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
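The reported accuracy of 0.5633802816901409 is exactly 40/71, which is consistent with a majority-class baseline on the 71-example WNLI dev set (the 40/71 class split is an assumption, not stated in the card). A quick check:

```python
# Assumption: WNLI dev has 71 examples, 40 of them in the majority class.
# 40/71 reproduces the reported eval accuracy to float precision, which
# suggests the model did no better than always predicting one class.
dev_size = 71
majority_class_count = 40
baseline = majority_class_count / dev_size
print(baseline)
```

This kind of score is a common sign that a model failed to learn WNLI, a notoriously difficult GLUE task.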
| 617 |
textattack/xlnet-large-cased-STS-B | [
"LABEL_0"
] | Entry not found | 15 |
trnt/twitter_emotions | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: twitter_emotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_emotions
This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1647
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2486 | 1.0 | 2000 | 0.2115 | 0.931 |
| 0.135 | 2.0 | 4000 | 0.1725 | 0.936 |
| 0.1041 | 3.0 | 6000 | 0.1647 | 0.9375 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
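Because the card exposes only generic `LABEL_0`–`LABEL_5` ids, downstream users must map them back to emotion names. A hedged sketch using the conventional class order of the `emotion` dataset (sadness, joy, love, anger, fear, surprise) — this mapping is an assumption, since the card does not state it, so verify against the model's `config.json` before relying on it:

```python
# Hypothetical mapping from generic LABEL_k ids to the emotion dataset's
# conventional class order; not confirmed by the model card itself.
ID2EMOTION = {
    "LABEL_0": "sadness",
    "LABEL_1": "joy",
    "LABEL_2": "love",
    "LABEL_3": "anger",
    "LABEL_4": "fear",
    "LABEL_5": "surprise",
}

def readable_label(raw: str) -> str:
    # Fall back to the raw id if an unexpected label appears.
    return ID2EMOTION.get(raw, raw)

print(readable_label("LABEL_1"))  # → joy
```

With `transformers`, the same effect is normally achieved by setting `id2label` in the model config so the pipeline emits readable names directly.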
| 1,775 |
vidhur2k/mBERT-French-Mono | null | Entry not found | 15 |