| index | modelId | label | readme |
|---|---|---|---|
1,261 | m-newhauser/distilbert-political-tweets | [
"Democrat",
"Republican"
] | ---
language:
- en
license: lgpl-3.0
library_name: transformers
tags:
- text-classification
- transformers
- pytorch
- generated_from_keras_callback
metrics:
- accuracy
- f1
datasets:
- m-newhauser/senator-tweets
widget:
- text: "This pandemic has shown us clearly the vulgarity of our healthcare system. Highest costs in the world, yet not enough nurses or doctors. Many millions uninsured, while insurance company profits soar. The struggle continues. Healthcare is a human right. Medicare for all."
example_title: "Bernie Sanders (D)"
- text: "Team Biden would rather fund the Ayatollah's Death to America regime than allow Americans to produce energy for our own domestic consumption."
example_title: "Ted Cruz (R)"
---
# distilbert-political-tweets 🗣 🇺🇸
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [m-newhauser/senator-tweets](https://huggingface.co/datasets/m-newhauser/senator-tweets) dataset, which contains all tweets made by United States senators during the first year of the Biden Administration.
It achieves the following results on the evaluation set:
* Accuracy: 0.9076
* F1: 0.9117
## Model description
The goal of this model is to classify short pieces of text as having either Democratic or Republican sentiment. The model was fine-tuned on 99,693 tweets (51.6% Democrat, 48.4% Republican) made by US senators in 2021.
Model accuracy may not hold up on pieces of text longer than a tweet.
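The classifier head emits one logit per label. A minimal, illustrative sketch of mapping raw logits to this card's two labels (the logit values below are made up for illustration, not real model output):

```python
import math

LABELS = ["Democrat", "Republican"]  # label order from this card's label list

def softmax(logits):
    # Numerically stable softmax over a list of floats
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def to_prediction(logits):
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return {"label": LABELS[best], "score": probs[best]}

# Made-up logits for illustration only:
print(to_prediction([2.1, -0.3]))  # higher first logit -> "Democrat"
```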
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam
- training_precision: float32
- learning_rate: 5e-5
- num_epochs: 5
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
1,262 | m3hrdadfi/albert-fa-base-v2-clf-digimag | [
"بازی ویدیویی",
"راهنمای خرید",
"سلامت و زیبایی",
"علم و تکنولوژی",
"عمومی",
"هنر و سینما",
"کتاب و ادبیات"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "little BERT" (برت_کوچولو).
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained on Google's ALBERT BASE v2.0 over a corpus of varied writing styles and subjects (e.g., scientific texts, novels, news) comprising more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach used for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Text Classification [DigiMag, Persian News]
The task is to label texts in a supervised manner, using two existing datasets: `DigiMag` and `Persian News`.
### DigiMag
A total of 8,515 articles were scraped from [Digikala Online Magazine](https://www.digikala.com/mag/). The dataset includes seven classes:
1. Video Games
2. Shopping Guide
3. Health Beauty
4. Science Technology
5. General
6. Art Cinema
7. Books Literature
| Label | # |
|:------------------:|:----:|
| Video Games | 1967 |
| Shopping Guide | 125 |
| Health Beauty | 1610 |
| Science Technology | 2772 |
| General | 120 |
| Art Cinema | 1667 |
| Books Literature | 254 |
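As a quick sanity check, the per-class counts in the table above sum to the stated 8,515 articles:

```python
digimag_counts = {
    "Video Games": 1967,
    "Shopping Guide": 125,
    "Health Beauty": 1610,
    "Science Technology": 2772,
    "General": 120,
    "Art Cinema": 1667,
    "Books Literature": 254,
}
total = sum(digimag_counts.values())
print(total)  # 8515
```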
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1YgrCYY-Z0h2z0-PfWVfOGt1Tv0JDI-qz)
## Results
The following table summarizes the F1 score obtained by ALBERT-fa-base-v2 compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT |
|:-----------------:|:-----------------:|:-----------:|:-----:|
| Digikala Magazine | 92.33 | 93.59 | 90.72 |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a GitHub issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
1,263 | m3hrdadfi/albert-fa-base-v2-clf-persiannews | [
"اجتماعی",
"اقتصادی",
"بین الملل",
"سیاسی",
"علمی فناوری",
"فرهنگی هنری",
"ورزشی",
"پزشکی"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "little BERT" (برت_کوچولو).
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained on Google's ALBERT BASE v2.0 over a corpus of varied writing styles and subjects (e.g., scientific texts, novels, news) comprising more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach used for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Text Classification [DigiMag, Persian News]
The task is to label texts in a supervised manner, using two existing datasets: `DigiMag` and `Persian News`.
### Persian News
A dataset of news articles scraped from various online news agencies' websites. The total number of articles is 16,438, spread over eight classes:
1. Social
2. Economic
3. International
4. Political
5. Science Technology
6. Cultural Art
7. Sport
8. Medical
| Label | # |
|:------------------:|:----:|
| Social | 2170 |
| Economic | 1564 |
| International | 1975 |
| Political | 2269 |
| Science Technology | 2436 |
| Cultural Art | 2558 |
| Sport | 1381 |
| Medical | 2085 |
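As a quick sanity check, the per-class counts above sum to the stated 16,438 articles, with Cultural Art the largest class:

```python
news_counts = {
    "Social": 2170, "Economic": 1564, "International": 1975, "Political": 2269,
    "Science Technology": 2436, "Cultural Art": 2558, "Sport": 1381, "Medical": 2085,
}
print(sum(news_counts.values()))              # 16438
print(max(news_counts, key=news_counts.get))  # Cultural Art
```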
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC)
## Results
The following table summarizes the F1 score obtained by ALBERT-fa-base-v2 compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT |
|:-----------------:|:-----------------:|:-----------:|:-----:|
| Persian News | 97.01 | 97.19 | 95.79 |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a GitHub issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
1,264 | m3hrdadfi/albert-fa-base-v2-sentiment-binary | [
"Negative",
"Positive"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "little BERT" (برت_کوچولو).
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained on Google's ALBERT BASE v2.0 over a corpus of varied writing styles and subjects (e.g., scientific texts, novels, news) comprising more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach used for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task classifies texts, such as user comments, by sentiment. We evaluated three well-known datasets: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, the last in both binary and multi-class forms.
## Results
The model obtained an F1 score of 87.56% on a combination of all three datasets mapped to the binary labels `Negative` and `Positive`.
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a GitHub issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
1,265 | m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary | [
"negative",
"positive"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "little BERT" (برت_کوچولو).
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained on Google's ALBERT BASE v2.0 over a corpus of varied writing styles and subjects (e.g., scientific texts, novels, news) comprising more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach used for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task classifies texts, such as user comments, by sentiment. We evaluated three well-known datasets: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, the last in both binary and multi-class forms.
### DeepSentiPers
DeepSentiPers, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five classes: two positive (happy and delighted), two negative (furious and angry), and one neutral. The dataset can therefore be used for both multi-class and binary classification; in the binary case, the neutral class and its sentences are removed.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi:**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
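Collapsing the five classes above into the binary scheme (dropping the neutral class, as described) can be sketched as:

```python
counts = {"Furious": 236, "Angry": 1357, "Neutral": 2874, "Happy": 2848, "Delighted": 2516}
BINARY = {"Furious": "Negative", "Angry": "Negative", "Happy": "Positive", "Delighted": "Positive"}

binary_counts = {"Negative": 0, "Positive": 0}
for label, n in counts.items():
    if label == "Neutral":
        continue  # neutral sentences are removed in the binary setting
    binary_counts[BINARY[label]] += n

print(binary_counts)  # {'Negative': 1593, 'Positive': 5364}
```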
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 score obtained by ALBERT-fa-base-v2 compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a GitHub issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
1,266 | m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi | [
"angry",
"delighted",
"furious",
"happy",
"neutral"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "little BERT" (برت_کوچولو).
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained on Google's ALBERT BASE v2.0 over a corpus of varied writing styles and subjects (e.g., scientific texts, novels, news) comprising more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach used for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task classifies texts, such as user comments, by sentiment. We evaluated three well-known datasets: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, the last in both binary and multi-class forms.
### DeepSentiPers
DeepSentiPers, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five classes: two positive (happy and delighted), two negative (furious and angry), and one neutral. The dataset can therefore be used for both multi-class and binary classification; in the binary case, the neutral class and its sentences are removed.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi:**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
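The class counts above remain somewhat imbalanced (Furious in particular is rare). One common mitigation for multi-class training, shown here purely as an illustration (the card does not state this was used), is inverse-frequency class weighting:

```python
counts = {"Furious": 236, "Angry": 1357, "Neutral": 2874, "Happy": 2848, "Delighted": 2516}
total = sum(counts.values())
n_classes = len(counts)

# weight_c = total / (n_classes * count_c): rarer classes get larger weights
weights = {c: total / (n_classes * n) for c, n in counts.items()}
print(weights["Furious"] > weights["Neutral"])  # True: rarest class weighted highest
```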
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 score obtained by ALBERT-fa-base-v2 compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a GitHub issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
1,267 | m3hrdadfi/albert-fa-base-v2-sentiment-digikala | [
"no_idea",
"not_recommended",
"recommended"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "little BERT" (برت_کوچولو).
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained on Google's ALBERT BASE v2.0 over a corpus of varied writing styles and subjects (e.g., scientific texts, novels, news) comprising more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach used for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task classifies texts, such as user comments, by sentiment. We evaluated three well-known datasets: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, the last in both binary and multi-class forms.
### Digikala
Digikala user comments provided by [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels:
| Label | # |
|:---------------:|:------:|
| no_idea | 10394 |
| not_recommended | 15885 |
| recommended | 36042 |
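As a quick sanity check, the label counts above sum to the stated 62,321 comments, with `recommended` making up roughly 58% of the data:

```python
digikala_counts = {"no_idea": 10394, "not_recommended": 15885, "recommended": 36042}
total = sum(digikala_counts.values())
print(total)                                            # 62321
print(round(digikala_counts["recommended"] / total, 3)) # 0.578
```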
**Download**
You can download the dataset from [here](https://www.digikala.com/opendata/)
## Results
The following table summarizes the F1 score obtained by ALBERT-fa-base-v2 compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.12 | 81.74 | 80.74 | - |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a GitHub issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
1,268 | m3hrdadfi/albert-fa-base-v2-sentiment-multi | [
"Negative",
"Neutral",
"Positive"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "little BERT" (برت_کوچولو).
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained on Google's ALBERT BASE v2.0 over a corpus of varied writing styles and subjects (e.g., scientific texts, novels, news) comprising more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach used for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task classifies texts, such as user comments, by sentiment. We evaluated three well-known datasets: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, the last in both binary and multi-class forms.
## Results
The model obtained an F1 score of 70.72% on a combination of all three datasets mapped to the labels `Negative`, `Neutral`, and `Positive`.
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a GitHub issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
1,269 | m3hrdadfi/albert-fa-base-v2-sentiment-snappfood | [
"HAPPY",
"SAD"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "little BERT" (برت_کوچولو).
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained on Google's ALBERT BASE v2.0 over a corpus of varied writing styles and subjects (e.g., scientific texts, novels, news) comprising more than 3.9M documents, 73M sentences, and 1.3B words, following the same approach used for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task classifies texts, such as user comments, by sentiment. We evaluated three well-known datasets: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, the last in both binary and multi-class forms.
### SnappFood
A dataset of 70,000 user comments from [Snappfood](https://snappfood.ir/), an online food-delivery company, with two polarity labels:
1. Happy
2. Sad
| Label | # |
|:--------:|:-----:|
| Negative | 35000 |
| Positive | 35000 |
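The dataset is perfectly balanced across its two polarity classes (listed as Happy/Sad above and as Negative/Positive in the table), which a one-liner confirms:

```python
snappfood_counts = {"Happy": 35000, "Sad": 35000}
total = sum(snappfood_counts.values())
print(total)                                     # 70000
print(len(set(snappfood_counts.values())) == 1)  # True: both classes have equal counts
```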
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=15J4zPN1BD7Q_ZIQ39VeFquwSoW8qTxgu)
## Results
The following table summarizes the F1 score obtained by ALBERT-fa-base-v2 compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| SnappFood User Comments | 85.79 | 88.12 | 87.87 | - |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a GitHub issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
1,270 | m3hrdadfi/bert-fa-base-uncased-farstail | [
"contradiction",
"entailment",
"neutral"
] | ---
language: fa
license: apache-2.0
---
# FarsTail + ParsBERT
Please follow the [FarsTail](https://github.com/dml-qom/FarsTail) repo for the latest information about the dataset. For models built on this dataset, check out the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo.
```bibtex
@article{amirkhani2020farstail,
title={FarsTail: A Persian Natural Language Inference Dataset},
author={Hossein Amirkhani and Mohammad Azari Jafari and Azadeh Amirak and Zohreh Pourjafari and Soroush Faridan Jahromi and Zeinab Kouhkan},
journal={arXiv preprint arXiv:2009.08820},
year={2020}
}
``` |
1,271 | m3hrdadfi/bert-fa-base-uncased-wikinli | [
"contradiction",
"entailment"
] | ---
language: fa
license: apache-2.0
---
# ParsBERT + Sentence Transformers
Please follow the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo for the latest information about previous and current models.
```bibtex
@misc{SentenceTransformerWiki,
author = {Mehrdad Farahani},
title = {Sentence Embeddings with ParsBERT},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/sentence-transformers}},
}
``` |
1,272 | m3hrdadfi/zabanshenas-roberta-base-mix | [
"ace",
"afr",
"als",
"amh",
"ang",
"ara",
"arg",
"arz",
"asm",
"ast",
"ava",
"aym",
"azb",
"aze",
"bak",
"bar",
"bcl",
"be-tarask",
"bel",
"ben",
"bho",
"bjn",
"bod",
"bos",
"bpy",
"bre",
"bul",
"bxr",
"cat",
"cbk",
"cdo",
"ceb",
"ces",
"che",
"chr... | ---
language:
- multilingual
- ace
- afr
- als
- amh
- ang
- ara
- arg
- arz
- asm
- ast
- ava
- aym
- azb
- aze
- bak
- bar
- bcl
- bel
- ben
- bho
- bjn
- bod
- bos
- bpy
- bre
- bul
- bxr
- cat
- cbk
- cdo
- ceb
- ces
- che
- chr
- chv
- ckb
- cor
- cos
- crh
- csb
- cym
- dan
- deu
- diq
- div
- dsb
- dty
- egl
- ell
- eng
- epo
- est
- eus
- ext
- fao
- fas
- fin
- fra
- frp
- fry
- fur
- gag
- gla
- gle
- glg
- glk
- glv
- grn
- guj
- hak
- hat
- hau
- hbs
- heb
- hif
- hin
- hrv
- hsb
- hun
- hye
- ibo
- ido
- ile
- ilo
- ina
- ind
- isl
- ita
- jam
- jav
- jbo
- jpn
- kaa
- kab
- kan
- kat
- kaz
- kbd
- khm
- kin
- kir
- koi
- kok
- kom
- kor
- krc
- ksh
- kur
- lad
- lao
- lat
- lav
- lez
- lij
- lim
- lin
- lit
- lmo
- lrc
- ltg
- ltz
- lug
- lzh
- mai
- mal
- mar
- mdf
- mhr
- min
- mkd
- mlg
- mlt
- nan
- mon
- mri
- mrj
- msa
- mwl
- mya
- myv
- mzn
- nap
- nav
- nci
- nds
- nep
- new
- nld
- nno
- nob
- nrm
- nso
- oci
- olo
- ori
- orm
- oss
- pag
- pam
- pan
- pap
- pcd
- pdc
- pfl
- pnb
- pol
- por
- pus
- que
- roh
- ron
- rue
- rup
- rus
- sah
- san
- scn
- sco
- sgs
- sin
- slk
- slv
- sme
- sna
- snd
- som
- spa
- sqi
- srd
- srn
- srp
- stq
- sun
- swa
- swe
- szl
- tam
- tat
- tcy
- tel
- tet
- tgk
- tgl
- tha
- ton
- tsn
- tuk
- tur
- tyv
- udm
- uig
- ukr
- urd
- uzb
- vec
- vep
- vie
- vls
- vol
- vro
- war
- wln
- wol
- wuu
- xho
- xmf
- yid
- yor
- zea
- zho
language_bcp47:
- be-tarask
- map-bms
- nds-nl
- roa-tara
- zh-yue
license: apache-2.0
datasets:
- wili_2018
---
# Zabanshenas - Language Detector
Zabanshenas is a Transformer-based solution for identifying the most likely language of a written document/text. Zabanshenas is a Persian word that has two meanings:
- A person who studies linguistics.
- A way to identify the type of written language.
## How to use
Follow the [Zabanshenas repo](https://github.com/m3hrdadfi/zabanshenas) for more information.
## Evaluation
The following tables summarize the scores obtained by the model, overall and per class.
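The f1-score column is the usual harmonic mean of precision and recall; as a spot check, recomputing it from the Arabic (ara) row of the per-paragraph table reproduces the reported value:

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Precision/recall values from the Arabic (ara) row
print(round(f1(0.846154, 0.982143), 6))  # 0.909091, matching the table
```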
### By Paragraph
| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 1.000000 | 0.982143 | 0.990991 |
| Afrikaans (afr) | 1.000000 | 1.000000 | 1.000000 |
| Alemannic German (als) | 1.000000 | 0.946429 | 0.972477 |
| Amharic (amh) | 1.000000 | 0.982143 | 0.990991 |
| Old English (ang) | 0.981818 | 0.964286 | 0.972973 |
| Arabic (ara) | 0.846154 | 0.982143 | 0.909091 |
| Aragonese (arg) | 1.000000 | 1.000000 | 1.000000 |
| Egyptian Arabic (arz) | 0.979592 | 0.857143 | 0.914286 |
| Assamese (asm) | 0.981818 | 0.964286 | 0.972973 |
| Asturian (ast) | 0.964912 | 0.982143 | 0.973451 |
| Avar (ava) | 0.941176 | 0.905660 | 0.923077 |
| Aymara (aym) | 0.964912 | 0.982143 | 0.973451 |
| South Azerbaijani (azb) | 0.965517 | 1.000000 | 0.982456 |
| Azerbaijani (aze) | 1.000000 | 1.000000 | 1.000000 |
| Bashkir (bak) | 1.000000 | 0.978261 | 0.989011 |
| Bavarian (bar) | 0.843750 | 0.964286 | 0.900000 |
| Central Bikol (bcl) | 1.000000 | 0.982143 | 0.990991 |
| Belarusian (Taraschkewiza) (be-tarask) | 1.000000 | 0.875000 | 0.933333 |
| Belarusian (bel) | 0.870968 | 0.964286 | 0.915254 |
| Bengali (ben) | 0.982143 | 0.982143 | 0.982143 |
| Bhojpuri (bho) | 1.000000 | 0.928571 | 0.962963 |
| Banjar (bjn) | 0.981132 | 0.945455 | 0.962963 |
| Tibetan (bod) | 1.000000 | 0.982143 | 0.990991 |
| Bosnian (bos) | 0.552632 | 0.375000 | 0.446809 |
| Bishnupriya (bpy) | 1.000000 | 0.982143 | 0.990991 |
| Breton (bre) | 1.000000 | 0.964286 | 0.981818 |
| Bulgarian (bul) | 1.000000 | 0.964286 | 0.981818 |
| Buryat (bxr) | 0.946429 | 0.946429 | 0.946429 |
| Catalan (cat) | 0.982143 | 0.982143 | 0.982143 |
| Chavacano (cbk) | 0.914894 | 0.767857 | 0.834951 |
| Min Dong (cdo) | 1.000000 | 0.982143 | 0.990991 |
| Cebuano (ceb) | 1.000000 | 1.000000 | 1.000000 |
| Czech (ces) | 1.000000 | 1.000000 | 1.000000 |
| Chechen (che) | 1.000000 | 1.000000 | 1.000000 |
| Cherokee (chr) | 1.000000 | 0.963636 | 0.981481 |
| Chuvash (chv) | 0.938776 | 0.958333 | 0.948454 |
| Central Kurdish (ckb) | 1.000000 | 1.000000 | 1.000000 |
| Cornish (cor) | 1.000000 | 1.000000 | 1.000000 |
| Corsican (cos) | 1.000000 | 0.982143 | 0.990991 |
| Crimean Tatar (crh) | 1.000000 | 0.946429 | 0.972477 |
| Kashubian (csb) | 1.000000 | 0.963636 | 0.981481 |
| Welsh (cym) | 1.000000 | 1.000000 | 1.000000 |
| Danish (dan) | 1.000000 | 1.000000 | 1.000000 |
| German (deu) | 0.828125 | 0.946429 | 0.883333 |
| Dimli (diq) | 0.964912 | 0.982143 | 0.973451 |
| Dhivehi (div) | 1.000000 | 1.000000 | 1.000000 |
| Lower Sorbian (dsb) | 1.000000 | 0.982143 | 0.990991 |
| Doteli (dty) | 0.940000 | 0.854545 | 0.895238 |
| Emilian (egl) | 1.000000 | 0.928571 | 0.962963 |
| Modern Greek (ell) | 1.000000 | 1.000000 | 1.000000 |
| English (eng) | 0.588889 | 0.946429 | 0.726027 |
| Esperanto (epo) | 1.000000 | 0.982143 | 0.990991 |
| Estonian (est) | 0.963636 | 0.946429 | 0.954955 |
| Basque (eus) | 1.000000 | 0.982143 | 0.990991 |
| Extremaduran (ext) | 0.982143 | 0.982143 | 0.982143 |
| Faroese (fao) | 1.000000 | 1.000000 | 1.000000 |
| Persian (fas) | 0.948276 | 0.982143 | 0.964912 |
| Finnish (fin) | 1.000000 | 1.000000 | 1.000000 |
| French (fra) | 0.710145 | 0.875000 | 0.784000 |
| Arpitan (frp) | 1.000000 | 0.946429 | 0.972477 |
| Western Frisian (fry) | 0.982143 | 0.982143 | 0.982143 |
| Friulian (fur) | 1.000000 | 0.982143 | 0.990991 |
| Gagauz (gag) | 0.981132 | 0.945455 | 0.962963 |
| Scottish Gaelic (gla) | 0.982143 | 0.982143 | 0.982143 |
| Irish (gle) | 0.949153 | 1.000000 | 0.973913 |
| Galician (glg) | 1.000000 | 1.000000 | 1.000000 |
| Gilaki (glk) | 0.981132 | 0.945455 | 0.962963 |
| Manx (glv) | 1.000000 | 1.000000 | 1.000000 |
| Guarani (grn) | 1.000000 | 0.964286 | 0.981818 |
| Gujarati (guj) | 1.000000 | 0.982143 | 0.990991 |
| Hakka Chinese (hak) | 0.981818 | 0.964286 | 0.972973 |
| Haitian Creole (hat) | 1.000000 | 1.000000 | 1.000000 |
| Hausa (hau) | 1.000000 | 0.945455 | 0.971963 |
| Serbo-Croatian (hbs) | 0.448276 | 0.464286 | 0.456140 |
| Hebrew (heb) | 1.000000 | 0.982143 | 0.990991 |
| Fiji Hindi (hif) | 0.890909 | 0.890909 | 0.890909 |
| Hindi (hin) | 0.981481 | 0.946429 | 0.963636 |
| Croatian (hrv) | 0.500000 | 0.636364 | 0.560000 |
| Upper Sorbian (hsb) | 0.955556 | 1.000000 | 0.977273 |
| Hungarian (hun) | 1.000000 | 1.000000 | 1.000000 |
| Armenian (hye) | 1.000000 | 0.981818 | 0.990826 |
| Igbo (ibo) | 0.918033 | 1.000000 | 0.957265 |
| Ido (ido) | 1.000000 | 1.000000 | 1.000000 |
| Interlingue (ile) | 1.000000 | 0.962264 | 0.980769 |
| Iloko (ilo) | 0.947368 | 0.964286 | 0.955752 |
| Interlingua (ina) | 1.000000 | 1.000000 | 1.000000 |
| Indonesian (ind) | 0.761905 | 0.872727 | 0.813559 |
| Icelandic (isl) | 1.000000 | 1.000000 | 1.000000 |
| Italian (ita) | 0.861538 | 1.000000 | 0.925620 |
| Jamaican Patois (jam) | 1.000000 | 0.946429 | 0.972477 |
| Javanese (jav) | 0.964912 | 0.982143 | 0.973451 |
| Lojban (jbo) | 1.000000 | 1.000000 | 1.000000 |
| Japanese (jpn) | 1.000000 | 1.000000 | 1.000000 |
| Karakalpak (kaa) | 0.965517 | 1.000000 | 0.982456 |
| Kabyle (kab) | 1.000000 | 0.964286 | 0.981818 |
| Kannada (kan) | 0.982143 | 0.982143 | 0.982143 |
| Georgian (kat) | 1.000000 | 0.964286 | 0.981818 |
| Kazakh (kaz) | 0.980769 | 0.980769 | 0.980769 |
| Kabardian (kbd) | 1.000000 | 0.982143 | 0.990991 |
| Central Khmer (khm) | 0.960784 | 0.875000 | 0.915888 |
| Kinyarwanda (kin) | 0.981132 | 0.928571 | 0.954128 |
| Kirghiz (kir) | 1.000000 | 1.000000 | 1.000000 |
| Komi-Permyak (koi) | 0.962264 | 0.910714 | 0.935780 |
| Konkani (kok) | 0.964286 | 0.981818 | 0.972973 |
| Komi (kom) | 1.000000 | 0.962264 | 0.980769 |
| Korean (kor) | 1.000000 | 1.000000 | 1.000000 |
| Karachay-Balkar (krc) | 1.000000 | 0.982143 | 0.990991 |
| Ripuarisch (ksh) | 1.000000 | 0.964286 | 0.981818 |
| Kurdish (kur) | 1.000000 | 0.964286 | 0.981818 |
| Ladino (lad) | 1.000000 | 1.000000 | 1.000000 |
| Lao (lao) | 0.961538 | 0.909091 | 0.934579 |
| Latin (lat) | 0.877193 | 0.943396 | 0.909091 |
| Latvian (lav) | 0.963636 | 0.946429 | 0.954955 |
| Lezghian (lez) | 1.000000 | 0.964286 | 0.981818 |
| Ligurian (lij) | 1.000000 | 0.964286 | 0.981818 |
| Limburgan (lim) | 0.938776 | 1.000000 | 0.968421 |
| Lingala (lin) | 0.980769 | 0.927273 | 0.953271 |
| Lithuanian (lit) | 0.982456 | 1.000000 | 0.991150 |
| Lombard (lmo) | 1.000000 | 1.000000 | 1.000000 |
| Northern Luri (lrc) | 1.000000 | 0.928571 | 0.962963 |
| Latgalian (ltg) | 1.000000 | 0.982143 | 0.990991 |
| Luxembourgish (ltz) | 0.949153 | 1.000000 | 0.973913 |
| Luganda (lug) | 1.000000 | 1.000000 | 1.000000 |
| Literary Chinese (lzh) | 1.000000 | 1.000000 | 1.000000 |
| Maithili (mai) | 0.931034 | 0.964286 | 0.947368 |
| Malayalam (mal) | 1.000000 | 0.982143 | 0.990991 |
| Banyumasan (map-bms) | 0.977778 | 0.785714 | 0.871287 |
| Marathi (mar) | 0.949153 | 1.000000 | 0.973913 |
| Moksha (mdf) | 0.980000 | 0.890909 | 0.933333 |
| Eastern Mari (mhr) | 0.981818 | 0.964286 | 0.972973 |
| Minangkabau (min) | 1.000000 | 1.000000 | 1.000000 |
| Macedonian (mkd) | 1.000000 | 0.981818 | 0.990826 |
| Malagasy (mlg) | 0.981132 | 1.000000 | 0.990476 |
| Maltese (mlt) | 0.982456 | 1.000000 | 0.991150 |
| Min Nan Chinese (nan) | 1.000000 | 1.000000 | 1.000000 |
| Mongolian (mon) | 1.000000 | 0.981818 | 0.990826 |
| Maori (mri) | 1.000000 | 1.000000 | 1.000000 |
| Western Mari (mrj) | 0.982456 | 1.000000 | 0.991150 |
| Malay (msa) | 0.862069 | 0.892857 | 0.877193 |
| Mirandese (mwl) | 1.000000 | 0.982143 | 0.990991 |
| Burmese (mya) | 1.000000 | 1.000000 | 1.000000 |
| Erzya (myv) | 0.818182 | 0.964286 | 0.885246 |
| Mazanderani (mzn) | 0.981481 | 1.000000 | 0.990654 |
| Neapolitan (nap) | 1.000000 | 0.981818 | 0.990826 |
| Navajo (nav) | 1.000000 | 1.000000 | 1.000000 |
| Classical Nahuatl (nci) | 0.981481 | 0.946429 | 0.963636 |
| Low German (nds) | 0.982143 | 0.982143 | 0.982143 |
| West Low German (nds-nl) | 1.000000 | 1.000000 | 1.000000 |
| Nepali (macrolanguage) (nep) | 0.881356 | 0.928571 | 0.904348 |
| Newari (new) | 1.000000 | 0.909091 | 0.952381 |
| Dutch (nld) | 0.982143 | 0.982143 | 0.982143 |
| Norwegian Nynorsk (nno) | 1.000000 | 1.000000 | 1.000000 |
| Bokmål (nob) | 1.000000 | 1.000000 | 1.000000 |
| Narom (nrm) | 0.981818 | 0.964286 | 0.972973 |
| Northern Sotho (nso) | 1.000000 | 1.000000 | 1.000000 |
| Occitan (oci) | 0.903846 | 0.839286 | 0.870370 |
| Livvi-Karelian (olo) | 0.982456 | 1.000000 | 0.991150 |
| Oriya (ori) | 0.964912 | 0.982143 | 0.973451 |
| Oromo (orm) | 0.982143 | 0.982143 | 0.982143 |
| Ossetian (oss) | 0.982143 | 1.000000 | 0.990991 |
| Pangasinan (pag) | 0.980000 | 0.875000 | 0.924528 |
| Pampanga (pam) | 0.928571 | 0.896552 | 0.912281 |
| Panjabi (pan) | 1.000000 | 1.000000 | 1.000000 |
| Papiamento (pap) | 1.000000 | 0.964286 | 0.981818 |
| Picard (pcd) | 0.849057 | 0.849057 | 0.849057 |
| Pennsylvania German (pdc) | 0.854839 | 0.946429 | 0.898305 |
| Palatine German (pfl) | 0.946429 | 0.946429 | 0.946429 |
| Western Panjabi (pnb) | 0.981132 | 0.962963 | 0.971963 |
| Polish (pol) | 0.933333 | 1.000000 | 0.965517 |
| Portuguese (por) | 0.774648 | 0.982143 | 0.866142 |
| Pushto (pus) | 1.000000 | 0.910714 | 0.953271 |
| Quechua (que) | 0.962963 | 0.928571 | 0.945455 |
| Tarantino dialect (roa-tara) | 1.000000 | 0.964286 | 0.981818 |
| Romansh (roh) | 1.000000 | 0.928571 | 0.962963 |
| Romanian (ron) | 0.965517 | 1.000000 | 0.982456 |
| Rusyn (rue) | 0.946429 | 0.946429 | 0.946429 |
| Aromanian (rup) | 0.962963 | 0.928571 | 0.945455 |
| Russian (rus) | 0.859375 | 0.982143 | 0.916667 |
| Yakut (sah) | 1.000000 | 0.982143 | 0.990991 |
| Sanskrit (san) | 0.982143 | 0.982143 | 0.982143 |
| Sicilian (scn) | 1.000000 | 1.000000 | 1.000000 |
| Scots (sco) | 0.982143 | 0.982143 | 0.982143 |
| Samogitian (sgs) | 1.000000 | 0.982143 | 0.990991 |
| Sinhala (sin) | 0.964912 | 0.982143 | 0.973451 |
| Slovak (slk) | 1.000000 | 0.982143 | 0.990991 |
| Slovene (slv) | 1.000000 | 0.981818 | 0.990826 |
| Northern Sami (sme) | 0.962264 | 0.962264 | 0.962264 |
| Shona (sna) | 0.933333 | 1.000000 | 0.965517 |
| Sindhi (snd) | 1.000000 | 1.000000 | 1.000000 |
| Somali (som) | 0.948276 | 1.000000 | 0.973451 |
| Spanish (spa) | 0.739130 | 0.910714 | 0.816000 |
| Albanian (sqi) | 0.982143 | 0.982143 | 0.982143 |
| Sardinian (srd) | 1.000000 | 0.982143 | 0.990991 |
| Sranan (srn) | 1.000000 | 1.000000 | 1.000000 |
| Serbian (srp) | 1.000000 | 0.946429 | 0.972477 |
| Saterfriesisch (stq) | 1.000000 | 0.964286 | 0.981818 |
| Sundanese (sun) | 1.000000 | 0.977273 | 0.988506 |
| Swahili (macrolanguage) (swa) | 1.000000 | 1.000000 | 1.000000 |
| Swedish (swe) | 1.000000 | 1.000000 | 1.000000 |
| Silesian (szl) | 1.000000 | 0.981481 | 0.990654 |
| Tamil (tam) | 0.982143 | 1.000000 | 0.990991 |
| Tatar (tat) | 1.000000 | 1.000000 | 1.000000 |
| Tulu (tcy) | 0.982456 | 1.000000 | 0.991150 |
| Telugu (tel) | 1.000000 | 0.920000 | 0.958333 |
| Tetum (tet) | 1.000000 | 0.964286 | 0.981818 |
| Tajik (tgk) | 1.000000 | 1.000000 | 1.000000 |
| Tagalog (tgl) | 1.000000 | 1.000000 | 1.000000 |
| Thai (tha) | 0.932203 | 0.982143 | 0.956522 |
| Tongan (ton) | 1.000000 | 0.964286 | 0.981818 |
| Tswana (tsn) | 1.000000 | 1.000000 | 1.000000 |
| Turkmen (tuk) | 1.000000 | 0.982143 | 0.990991 |
| Turkish (tur) | 0.901639 | 0.982143 | 0.940171 |
| Tuvan (tyv) | 1.000000 | 0.964286 | 0.981818 |
| Udmurt (udm) | 1.000000 | 0.982143 | 0.990991 |
| Uighur (uig) | 1.000000 | 0.982143 | 0.990991 |
| Ukrainian (ukr) | 0.963636 | 0.946429 | 0.954955 |
| Urdu (urd) | 1.000000 | 0.982143 | 0.990991 |
| Uzbek (uzb) | 1.000000 | 1.000000 | 1.000000 |
| Venetian (vec) | 1.000000 | 0.982143 | 0.990991 |
| Veps (vep) | 0.982456 | 1.000000 | 0.991150 |
| Vietnamese (vie) | 0.964912 | 0.982143 | 0.973451 |
| Vlaams (vls) | 1.000000 | 0.982143 | 0.990991 |
| Volapük (vol) | 1.000000 | 1.000000 | 1.000000 |
| Võro (vro) | 0.964286 | 0.964286 | 0.964286 |
| Waray (war) | 1.000000 | 0.982143 | 0.990991 |
| Walloon (wln) | 1.000000 | 1.000000 | 1.000000 |
| Wolof (wol) | 0.981481 | 0.963636 | 0.972477 |
| Wu Chinese (wuu) | 0.981481 | 0.946429 | 0.963636 |
| Xhosa (xho) | 1.000000 | 0.964286 | 0.981818 |
| Mingrelian (xmf) | 1.000000 | 0.964286 | 0.981818 |
| Yiddish (yid) | 1.000000 | 1.000000 | 1.000000 |
| Yoruba (yor) | 0.964912 | 0.982143 | 0.973451 |
| Zeeuws (zea) | 1.000000 | 0.982143 | 0.990991 |
| Cantonese (zh-yue) | 0.981481 | 0.946429 | 0.963636 |
| Standard Chinese (zho) | 0.932203 | 0.982143 | 0.956522 |
| accuracy | 0.963055 | 0.963055 | 0.963055 |
| macro avg | 0.966424 | 0.963216 | 0.963891 |
| weighted avg | 0.966040 | 0.963055 | 0.963606 |
### By Sentence
| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 0.754545 | 0.873684 | 0.809756 |
| Afrikaans (afr) | 0.708955 | 0.940594 | 0.808511 |
| Alemannic German (als) | 0.870130 | 0.752809 | 0.807229 |
| Amharic (amh) | 1.000000 | 0.820000 | 0.901099 |
| Old English (ang) | 0.966667 | 0.906250 | 0.935484 |
| Arabic (ara) | 0.907692 | 0.967213 | 0.936508 |
| Aragonese (arg) | 0.921569 | 0.959184 | 0.940000 |
| Egyptian Arabic (arz) | 0.964286 | 0.843750 | 0.900000 |
| Assamese (asm) | 0.964286 | 0.870968 | 0.915254 |
| Asturian (ast) | 0.880000 | 0.795181 | 0.835443 |
| Avar (ava) | 0.864198 | 0.843373 | 0.853659 |
| Aymara (aym) | 1.000000 | 0.901961 | 0.948454 |
| South Azerbaijani (azb) | 0.979381 | 0.989583 | 0.984456 |
| Azerbaijani (aze) | 0.989899 | 0.960784 | 0.975124 |
| Bashkir (bak) | 0.837209 | 0.857143 | 0.847059 |
| Bavarian (bar) | 0.741935 | 0.766667 | 0.754098 |
| Central Bikol (bcl) | 0.962963 | 0.928571 | 0.945455 |
| Belarusian (Taraschkewiza) (be-tarask) | 0.857143 | 0.733333 | 0.790419 |
| Belarusian (bel) | 0.775510 | 0.752475 | 0.763819 |
| Bengali (ben) | 0.861111 | 0.911765 | 0.885714 |
| Bhojpuri (bho) | 0.965517 | 0.933333 | 0.949153 |
| Banjar (bjn) | 0.891566 | 0.880952 | 0.886228 |
| Tibetan (bod) | 1.000000 | 1.000000 | 1.000000 |
| Bosnian (bos) | 0.375000 | 0.323077 | 0.347107 |
| Bishnupriya (bpy) | 0.986301 | 1.000000 | 0.993103 |
| Breton (bre) | 0.951613 | 0.893939 | 0.921875 |
| Bulgarian (bul) | 0.945055 | 0.877551 | 0.910053 |
| Buryat (bxr) | 0.955556 | 0.843137 | 0.895833 |
| Catalan (cat) | 0.692308 | 0.750000 | 0.720000 |
| Chavacano (cbk) | 0.842857 | 0.641304 | 0.728395 |
| Min Dong (cdo) | 0.972973 | 1.000000 | 0.986301 |
| Cebuano (ceb) | 0.981308 | 0.954545 | 0.967742 |
| Czech (ces) | 0.944444 | 0.915385 | 0.929687 |
| Chechen (che) | 0.875000 | 0.700000 | 0.777778 |
| Cherokee (chr) | 1.000000 | 0.970588 | 0.985075 |
| Chuvash (chv) | 0.875000 | 0.836957 | 0.855556 |
| Central Kurdish (ckb) | 1.000000 | 0.983051 | 0.991453 |
| Cornish (cor) | 0.979592 | 0.969697 | 0.974619 |
| Corsican (cos) | 0.986842 | 0.925926 | 0.955414 |
| Crimean Tatar (crh) | 0.958333 | 0.907895 | 0.932432 |
| Kashubian (csb) | 0.920354 | 0.904348 | 0.912281 |
| Welsh (cym) | 0.971014 | 0.943662 | 0.957143 |
| Danish (dan) | 0.865169 | 0.777778 | 0.819149 |
| German (deu) | 0.721311 | 0.822430 | 0.768559 |
| Dimli (diq) | 0.915966 | 0.923729 | 0.919831 |
| Dhivehi (div) | 1.000000 | 0.991228 | 0.995595 |
| Lower Sorbian (dsb) | 0.898876 | 0.879121 | 0.888889 |
| Doteli (dty) | 0.821429 | 0.638889 | 0.718750 |
| Emilian (egl) | 0.988095 | 0.922222 | 0.954023 |
| Modern Greek (ell) | 0.988636 | 0.966667 | 0.977528 |
| English (eng) | 0.522727 | 0.784091 | 0.627273 |
| Esperanto (epo) | 0.963855 | 0.930233 | 0.946746 |
| Estonian (est) | 0.922222 | 0.873684 | 0.897297 |
| Basque (eus) | 1.000000 | 0.941176 | 0.969697 |
| Extremaduran (ext) | 0.925373 | 0.885714 | 0.905109 |
| Faroese (fao) | 0.855072 | 0.887218 | 0.870849 |
| Persian (fas) | 0.879630 | 0.979381 | 0.926829 |
| Finnish (fin) | 0.952830 | 0.943925 | 0.948357 |
| French (fra) | 0.676768 | 0.943662 | 0.788235 |
| Arpitan (frp) | 0.867925 | 0.807018 | 0.836364 |
| Western Frisian (fry) | 0.956989 | 0.890000 | 0.922280 |
| Friulian (fur) | 1.000000 | 0.857143 | 0.923077 |
| Gagauz (gag) | 0.939024 | 0.802083 | 0.865169 |
| Scottish Gaelic (gla) | 1.000000 | 0.879121 | 0.935673 |
| Irish (gle) | 0.989247 | 0.958333 | 0.973545 |
| Galician (glg) | 0.910256 | 0.922078 | 0.916129 |
| Gilaki (glk) | 0.964706 | 0.872340 | 0.916201 |
| Manx (glv) | 1.000000 | 0.965517 | 0.982456 |
| Guarani (grn) | 0.983333 | 1.000000 | 0.991597 |
| Gujarati (guj) | 1.000000 | 0.991525 | 0.995745 |
| Hakka Chinese (hak) | 0.955224 | 0.955224 | 0.955224 |
| Haitian Creole (hat) | 0.833333 | 0.666667 | 0.740741 |
| Hausa (hau) | 0.936709 | 0.913580 | 0.925000 |
| Serbo-Croatian (hbs) | 0.452830 | 0.410256 | 0.430493 |
| Hebrew (heb) | 0.988235 | 0.976744 | 0.982456 |
| Fiji Hindi (hif) | 0.936709 | 0.840909 | 0.886228 |
| Hindi (hin) | 0.965517 | 0.756757 | 0.848485 |
| Croatian (hrv) | 0.443820 | 0.537415 | 0.486154 |
| Upper Sorbian (hsb) | 0.951613 | 0.830986 | 0.887218 |
| Hungarian (hun) | 0.854701 | 0.909091 | 0.881057 |
| Armenian (hye) | 1.000000 | 0.816327 | 0.898876 |
| Igbo (ibo) | 0.974359 | 0.926829 | 0.950000 |
| Ido (ido) | 0.975000 | 0.987342 | 0.981132 |
| Interlingue (ile) | 0.880597 | 0.921875 | 0.900763 |
| Iloko (ilo) | 0.882353 | 0.821918 | 0.851064 |
| Interlingua (ina) | 0.952381 | 0.895522 | 0.923077 |
| Indonesian (ind) | 0.606383 | 0.695122 | 0.647727 |
| Icelandic (isl) | 0.978261 | 0.882353 | 0.927835 |
| Italian (ita) | 0.910448 | 0.910448 | 0.910448 |
| Jamaican Patois (jam) | 0.988764 | 0.967033 | 0.977778 |
| Javanese (jav) | 0.903614 | 0.862069 | 0.882353 |
| Lojban (jbo) | 0.943878 | 0.929648 | 0.936709 |
| Japanese (jpn) | 1.000000 | 0.764706 | 0.866667 |
| Karakalpak (kaa) | 0.940171 | 0.901639 | 0.920502 |
| Kabyle (kab) | 0.985294 | 0.837500 | 0.905405 |
| Kannada (kan) | 0.975806 | 0.975806 | 0.975806 |
| Georgian (kat) | 0.953704 | 0.903509 | 0.927928 |
| Kazakh (kaz) | 0.934579 | 0.877193 | 0.904977 |
| Kabardian (kbd) | 0.987952 | 0.953488 | 0.970414 |
| Central Khmer (khm) | 0.928571 | 0.829787 | 0.876404 |
| Kinyarwanda (kin) | 0.953125 | 0.938462 | 0.945736 |
| Kirghiz (kir) | 0.927632 | 0.881250 | 0.903846 |
| Komi-Permyak (koi) | 0.750000 | 0.776786 | 0.763158 |
| Konkani (kok) | 0.893491 | 0.872832 | 0.883041 |
| Komi (kom) | 0.734177 | 0.690476 | 0.711656 |
| Korean (kor) | 0.989899 | 0.989899 | 0.989899 |
| Karachay-Balkar (krc) | 0.928571 | 0.917647 | 0.923077 |
| Ripuarisch (ksh) | 0.915789 | 0.896907 | 0.906250 |
| Kurdish (kur) | 0.977528 | 0.935484 | 0.956044 |
| Ladino (lad) | 0.985075 | 0.904110 | 0.942857 |
| Lao (lao) | 0.896552 | 0.812500 | 0.852459 |
| Latin (lat) | 0.741935 | 0.831325 | 0.784091 |
| Latvian (lav) | 0.710526 | 0.878049 | 0.785455 |
| Lezghian (lez) | 0.975309 | 0.877778 | 0.923977 |
| Ligurian (lij) | 0.951807 | 0.897727 | 0.923977 |
| Limburgan (lim) | 0.909091 | 0.921053 | 0.915033 |
| Lingala (lin) | 0.942857 | 0.814815 | 0.874172 |
| Lithuanian (lit) | 0.892857 | 0.925926 | 0.909091 |
| Lombard (lmo) | 0.766234 | 0.951613 | 0.848921 |
| Northern Luri (lrc) | 0.972222 | 0.875000 | 0.921053 |
| Latgalian (ltg) | 0.895349 | 0.865169 | 0.880000 |
| Luxembourgish (ltz) | 0.882353 | 0.750000 | 0.810811 |
| Luganda (lug) | 0.946429 | 0.883333 | 0.913793 |
| Literary Chinese (lzh) | 1.000000 | 1.000000 | 1.000000 |
| Maithili (mai) | 0.893617 | 0.823529 | 0.857143 |
| Malayalam (mal) | 1.000000 | 0.975000 | 0.987342 |
| Banyumasan (map-bms) | 0.924242 | 0.772152 | 0.841379 |
| Marathi (mar) | 0.874126 | 0.919118 | 0.896057 |
| Moksha (mdf) | 0.771242 | 0.830986 | 0.800000 |
| Eastern Mari (mhr) | 0.820000 | 0.860140 | 0.839590 |
| Minangkabau (min) | 0.973684 | 0.973684 | 0.973684 |
| Macedonian (mkd) | 0.895652 | 0.953704 | 0.923767 |
| Malagasy (mlg) | 1.000000 | 0.966102 | 0.982759 |
| Maltese (mlt) | 0.987952 | 0.964706 | 0.976190 |
| Min Nan Chinese (nan) | 0.975000 | 1.000000 | 0.987342 |
| Mongolian (mon) | 0.954545 | 0.933333 | 0.943820 |
| Maori (mri) | 0.985294 | 1.000000 | 0.992593 |
| Western Mari (mrj) | 0.966292 | 0.914894 | 0.939891 |
| Malay (msa) | 0.770270 | 0.695122 | 0.730769 |
| Mirandese (mwl) | 0.970588 | 0.891892 | 0.929577 |
| Burmese (mya) | 1.000000 | 0.964286 | 0.981818 |
| Erzya (myv) | 0.535714 | 0.681818 | 0.600000 |
| Mazanderani (mzn) | 0.968750 | 0.898551 | 0.932331 |
| Neapolitan (nap) | 0.892308 | 0.865672 | 0.878788 |
| Navajo (nav) | 0.984375 | 0.984375 | 0.984375 |
| Classical Nahuatl (nci) | 0.901408 | 0.761905 | 0.825806 |
| Low German (nds) | 0.896226 | 0.913462 | 0.904762 |
| West Low German (nds-nl) | 0.873563 | 0.835165 | 0.853933 |
| Nepali (macrolanguage) (nep) | 0.704545 | 0.861111 | 0.775000 |
| Newari (new) | 0.920000 | 0.741935 | 0.821429 |
| Dutch (nld) | 0.925926 | 0.872093 | 0.898204 |
| Norwegian Nynorsk (nno) | 0.847059 | 0.808989 | 0.827586 |
| Bokmål (nob) | 0.861386 | 0.852941 | 0.857143 |
| Narom (nrm) | 0.966667 | 0.983051 | 0.974790 |
| Northern Sotho (nso) | 0.897436 | 0.921053 | 0.909091 |
| Occitan (oci) | 0.958333 | 0.696970 | 0.807018 |
| Livvi-Karelian (olo) | 0.967742 | 0.937500 | 0.952381 |
| Oriya (ori) | 0.933333 | 1.000000 | 0.965517 |
| Oromo (orm) | 0.977528 | 0.915789 | 0.945652 |
| Ossetian (oss) | 0.958333 | 0.841463 | 0.896104 |
| Pangasinan (pag) | 0.847328 | 0.909836 | 0.877470 |
| Pampanga (pam) | 0.969697 | 0.780488 | 0.864865 |
| Panjabi (pan) | 1.000000 | 1.000000 | 1.000000 |
| Papiamento (pap) | 0.876190 | 0.920000 | 0.897561 |
| Picard (pcd) | 0.707317 | 0.568627 | 0.630435 |
| Pennsylvania German (pdc) | 0.827273 | 0.827273 | 0.827273 |
| Palatine German (pfl) | 0.882353 | 0.914634 | 0.898204 |
| Western Panjabi (pnb) | 0.964286 | 0.931034 | 0.947368 |
| Polish (pol) | 0.859813 | 0.910891 | 0.884615 |
| Portuguese (por) | 0.535714 | 0.833333 | 0.652174 |
| Pushto (pus) | 0.989362 | 0.902913 | 0.944162 |
| Quechua (que) | 0.979167 | 0.903846 | 0.940000 |
| Tarantino dialect (roa-tara) | 0.964912 | 0.901639 | 0.932203 |
| Romansh (roh) | 0.914894 | 0.895833 | 0.905263 |
| Romanian (ron) | 0.880597 | 0.880597 | 0.880597 |
| Rusyn (rue) | 0.932584 | 0.805825 | 0.864583 |
| Aromanian (rup) | 0.783333 | 0.758065 | 0.770492 |
| Russian (rus) | 0.517986 | 0.765957 | 0.618026 |
| Yakut (sah) | 0.954023 | 0.922222 | 0.937853 |
| Sanskrit (san) | 0.866667 | 0.951220 | 0.906977 |
| Sicilian (scn) | 0.984375 | 0.940299 | 0.961832 |
| Scots (sco) | 0.851351 | 0.900000 | 0.875000 |
| Samogitian (sgs) | 0.977011 | 0.876289 | 0.923913 |
| Sinhala (sin) | 0.406154 | 0.985075 | 0.575163 |
| Slovak (slk) | 0.956989 | 0.872549 | 0.912821 |
| Slovene (slv) | 0.907216 | 0.854369 | 0.880000 |
| Northern Sami (sme) | 0.949367 | 0.892857 | 0.920245 |
| Shona (sna) | 0.936508 | 0.855072 | 0.893939 |
| Sindhi (snd) | 0.984962 | 0.992424 | 0.988679 |
| Somali (som) | 0.949153 | 0.848485 | 0.896000 |
| Spanish (spa) | 0.584158 | 0.746835 | 0.655556 |
| Albanian (sqi) | 0.988095 | 0.912088 | 0.948571 |
| Sardinian (srd) | 0.957746 | 0.931507 | 0.944444 |
| Sranan (srn) | 0.985714 | 0.945205 | 0.965035 |
| Serbian (srp) | 0.950980 | 0.889908 | 0.919431 |
| Saterfriesisch (stq) | 0.962500 | 0.875000 | 0.916667 |
| Sundanese (sun) | 0.778846 | 0.910112 | 0.839378 |
| Swahili (macrolanguage) (swa) | 0.915493 | 0.878378 | 0.896552 |
| Swedish (swe) | 0.989247 | 0.958333 | 0.973545 |
| Silesian (szl) | 0.944444 | 0.904255 | 0.923913 |
| Tamil (tam) | 0.990000 | 0.970588 | 0.980198 |
| Tatar (tat) | 0.942029 | 0.902778 | 0.921986 |
| Tulu (tcy) | 0.980519 | 0.967949 | 0.974194 |
| Telugu (tel) | 0.965986 | 0.965986 | 0.965986 |
| Tetum (tet) | 0.898734 | 0.855422 | 0.876543 |
| Tajik (tgk) | 0.974684 | 0.939024 | 0.956522 |
| Tagalog (tgl) | 0.965909 | 0.934066 | 0.949721 |
| Thai (tha) | 0.923077 | 0.882353 | 0.902256 |
| Tongan (ton) | 0.970149 | 0.890411 | 0.928571 |
| Tswana (tsn) | 0.888889 | 0.926316 | 0.907216 |
| Turkmen (tuk) | 0.968000 | 0.889706 | 0.927203 |
| Turkish (tur) | 0.871287 | 0.926316 | 0.897959 |
| Tuvan (tyv) | 0.948454 | 0.859813 | 0.901961 |
| Udmurt (udm) | 0.989362 | 0.894231 | 0.939394 |
| Uighur (uig) | 1.000000 | 0.953333 | 0.976109 |
| Ukrainian (ukr) | 0.893617 | 0.875000 | 0.884211 |
| Urdu (urd) | 1.000000 | 1.000000 | 1.000000 |
| Uzbek (uzb) | 0.636042 | 0.886700 | 0.740741 |
| Venetian (vec) | 1.000000 | 0.941176 | 0.969697 |
| Veps (vep) | 0.858586 | 0.965909 | 0.909091 |
| Vietnamese (vie) | 1.000000 | 0.940476 | 0.969325 |
| Vlaams (vls) | 0.885714 | 0.898551 | 0.892086 |
| Volapük (vol) | 0.975309 | 0.975309 | 0.975309 |
| Võro (vro) | 0.855670 | 0.864583 | 0.860104 |
| Waray (war) | 0.972222 | 0.909091 | 0.939597 |
| Walloon (wln) | 0.742138 | 0.893939 | 0.810997 |
| Wolof (wol) | 0.882979 | 0.954023 | 0.917127 |
| Wu Chinese (wuu) | 0.961538 | 0.833333 | 0.892857 |
| Xhosa (xho) | 0.934066 | 0.867347 | 0.899471 |
| Mingrelian (xmf) | 0.958333 | 0.929293 | 0.943590 |
| Yiddish (yid) | 0.984375 | 0.875000 | 0.926471 |
| Yoruba (yor) | 0.868421 | 0.857143 | 0.862745 |
| Zeeuws (zea) | 0.879518 | 0.793478 | 0.834286 |
| Cantonese (zh-yue) | 0.896552 | 0.812500 | 0.852459 |
| Standard Chinese (zho) | 0.906250 | 0.935484 | 0.920635 |
| accuracy | 0.881051 | 0.881051 | 0.881051 |
| macro avg | 0.903245 | 0.880618 | 0.888996 |
| weighted avg | 0.894174 | 0.881051 | 0.884520 |
### By Token (3 to 5)
| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 0.873846 | 0.827988 | 0.850299 |
| Afrikaans (afr) | 0.638060 | 0.732334 | 0.681954 |
| Alemannic German (als) | 0.673780 | 0.547030 | 0.603825 |
| Amharic (amh) | 0.997743 | 0.954644 | 0.975717 |
| Old English (ang) | 0.840816 | 0.693603 | 0.760148 |
| Arabic (ara) | 0.768737 | 0.840749 | 0.803132 |
| Aragonese (arg) | 0.493671 | 0.505181 | 0.499360 |
| Egyptian Arabic (arz) | 0.823529 | 0.741935 | 0.780606 |
| Assamese (asm) | 0.948454 | 0.893204 | 0.920000 |
| Asturian (ast) | 0.490000 | 0.508299 | 0.498982 |
| Avar (ava) | 0.813636 | 0.655678 | 0.726166 |
| Aymara (aym) | 0.795833 | 0.779592 | 0.787629 |
| South Azerbaijani (azb) | 0.832836 | 0.863777 | 0.848024 |
| Azerbaijani (aze) | 0.867470 | 0.800000 | 0.832370 |
| Bashkir (bak) | 0.851852 | 0.750000 | 0.797688 |
| Bavarian (bar) | 0.560897 | 0.522388 | 0.540958 |
| Central Bikol (bcl) | 0.708229 | 0.668235 | 0.687651 |
| Belarusian (Taraschkewiza) (be-tarask) | 0.615635 | 0.526462 | 0.567568 |
| Belarusian (bel) | 0.539952 | 0.597855 | 0.567430 |
| Bengali (ben) | 0.830275 | 0.885086 | 0.856805 |
| Bhojpuri (bho) | 0.723118 | 0.691517 | 0.706965 |
| Banjar (bjn) | 0.619586 | 0.726269 | 0.668699 |
| Tibetan (bod) | 0.999537 | 0.991728 | 0.995617 |
| Bosnian (bos) | 0.330849 | 0.403636 | 0.363636 |
| Bishnupriya (bpy) | 0.941634 | 0.949020 | 0.945312 |
| Breton (bre) | 0.772222 | 0.745308 | 0.758527 |
| Bulgarian (bul) | 0.771505 | 0.706897 | 0.737789 |
| Buryat (bxr) | 0.741935 | 0.753149 | 0.747500 |
| Catalan (cat) | 0.528716 | 0.610136 | 0.566516 |
| Chavacano (cbk) | 0.409449 | 0.312625 | 0.354545 |
| Min Dong (cdo) | 0.951264 | 0.936057 | 0.943599 |
| Cebuano (ceb) | 0.888298 | 0.876640 | 0.882431 |
| Czech (ces) | 0.806045 | 0.758294 | 0.781441 |
| Chechen (che) | 0.857143 | 0.600000 | 0.705882 |
| Cherokee (chr) | 0.997840 | 0.952577 | 0.974684 |
| Chuvash (chv) | 0.874346 | 0.776744 | 0.822660 |
| Central Kurdish (ckb) | 0.984848 | 0.953545 | 0.968944 |
| Cornish (cor) | 0.747596 | 0.807792 | 0.776529 |
| Corsican (cos) | 0.673913 | 0.708571 | 0.690808 |
| Crimean Tatar (crh) | 0.498801 | 0.700337 | 0.582633 |
| Kashubian (csb) | 0.797059 | 0.794721 | 0.795888 |
| Welsh (cym) | 0.829609 | 0.841360 | 0.835443 |
| Danish (dan) | 0.649789 | 0.622222 | 0.635707 |
| German (deu) | 0.559406 | 0.763514 | 0.645714 |
| Dimli (diq) | 0.835580 | 0.763547 | 0.797941 |
| Dhivehi (div) | 1.000000 | 0.980645 | 0.990228 |
| Lower Sorbian (dsb) | 0.740484 | 0.694805 | 0.716918 |
| Doteli (dty) | 0.616314 | 0.527132 | 0.568245 |
| Emilian (egl) | 0.822993 | 0.769625 | 0.795414 |
| Modern Greek (ell) | 0.972043 | 0.963753 | 0.967880 |
| English (eng) | 0.260492 | 0.724346 | 0.383183 |
| Esperanto (epo) | 0.766764 | 0.716621 | 0.740845 |
| Estonian (est) | 0.698885 | 0.673835 | 0.686131 |
| Basque (eus) | 0.882716 | 0.841176 | 0.861446 |
| Extremaduran (ext) | 0.570605 | 0.511628 | 0.539510 |
| Faroese (fao) | 0.773987 | 0.784017 | 0.778970 |
| Persian (fas) | 0.709836 | 0.809346 | 0.756332 |
| Finnish (fin) | 0.866261 | 0.796089 | 0.829694 |
| French (fra) | 0.496263 | 0.700422 | 0.580927 |
| Arpitan (frp) | 0.663366 | 0.584302 | 0.621329 |
| Western Frisian (fry) | 0.750000 | 0.756148 | 0.753061 |
| Friulian (fur) | 0.713555 | 0.675545 | 0.694030 |
| Gagauz (gag) | 0.728125 | 0.677326 | 0.701807 |
| Scottish Gaelic (gla) | 0.831601 | 0.817996 | 0.824742 |
| Irish (gle) | 0.868852 | 0.801296 | 0.833708 |
| Galician (glg) | 0.469816 | 0.454315 | 0.461935 |
| Gilaki (glk) | 0.703883 | 0.687204 | 0.695444 |
| Manx (glv) | 0.873047 | 0.886905 | 0.879921 |
| Guarani (grn) | 0.848580 | 0.793510 | 0.820122 |
| Gujarati (guj) | 0.995643 | 0.926978 | 0.960084 |
| Hakka Chinese (hak) | 0.898403 | 0.904971 | 0.901675 |
| Haitian Creole (hat) | 0.719298 | 0.518987 | 0.602941 |
| Hausa (hau) | 0.815353 | 0.829114 | 0.822176 |
| Serbo-Croatian (hbs) | 0.343465 | 0.244589 | 0.285714 |
| Hebrew (heb) | 0.891304 | 0.933941 | 0.912125 |
| Fiji Hindi (hif) | 0.662577 | 0.664615 | 0.663594 |
| Hindi (hin) | 0.782301 | 0.778169 | 0.780229 |
| Croatian (hrv) | 0.360308 | 0.374000 | 0.367026 |
| Upper Sorbian (hsb) | 0.745763 | 0.611111 | 0.671756 |
| Hungarian (hun) | 0.876812 | 0.846154 | 0.861210 |
| Armenian (hye) | 0.988201 | 0.917808 | 0.951705 |
| Igbo (ibo) | 0.825397 | 0.696429 | 0.755448 |
| Ido (ido) | 0.760479 | 0.814103 | 0.786378 |
| Interlingue (ile) | 0.701299 | 0.580645 | 0.635294 |
| Iloko (ilo) | 0.688356 | 0.844538 | 0.758491 |
| Interlingua (ina) | 0.577889 | 0.588235 | 0.583016 |
| Indonesian (ind) | 0.415879 | 0.514019 | 0.459770 |
| Icelandic (isl) | 0.855263 | 0.790754 | 0.821745 |
| Italian (ita) | 0.474576 | 0.561247 | 0.514286 |
| Jamaican Patois (jam) | 0.826087 | 0.791667 | 0.808511 |
| Javanese (jav) | 0.670130 | 0.658163 | 0.664093 |
| Lojban (jbo) | 0.896861 | 0.917431 | 0.907029 |
| Japanese (jpn) | 0.931373 | 0.848214 | 0.887850 |
| Karakalpak (kaa) | 0.790393 | 0.827744 | 0.808637 |
| Kabyle (kab) | 0.828571 | 0.759162 | 0.792350 |
| Kannada (kan) | 0.879357 | 0.847545 | 0.863158 |
| Georgian (kat) | 0.916399 | 0.907643 | 0.912000 |
| Kazakh (kaz) | 0.900901 | 0.819672 | 0.858369 |
| Kabardian (kbd) | 0.923345 | 0.892256 | 0.907534 |
| Central Khmer (khm) | 0.976667 | 0.816156 | 0.889226 |
| Kinyarwanda (kin) | 0.824324 | 0.726190 | 0.772152 |
| Kirghiz (kir) | 0.674766 | 0.779698 | 0.723447 |
| Komi-Permyak (koi) | 0.652830 | 0.633700 | 0.643123 |
| Konkani (kok) | 0.778865 | 0.728938 | 0.753075 |
| Komi (kom) | 0.737374 | 0.572549 | 0.644592 |
| Korean (kor) | 0.984615 | 0.967603 | 0.976035 |
| Karachay-Balkar (krc) | 0.869416 | 0.857627 | 0.863481 |
| Ripuarisch (ksh) | 0.709859 | 0.649485 | 0.678331 |
| Kurdish (kur) | 0.883777 | 0.862884 | 0.873206 |
| Ladino (lad) | 0.660920 | 0.576441 | 0.615797 |
| Lao (lao) | 0.986175 | 0.918455 | 0.951111 |
| Latin (lat) | 0.581250 | 0.636986 | 0.607843 |
| Latvian (lav) | 0.824513 | 0.797844 | 0.810959 |
| Lezghian (lez) | 0.898955 | 0.793846 | 0.843137 |
| Ligurian (lij) | 0.662903 | 0.677100 | 0.669927 |
| Limburgan (lim) | 0.615385 | 0.581818 | 0.598131 |
| Lingala (lin) | 0.836207 | 0.763780 | 0.798354 |
| Lithuanian (lit) | 0.756329 | 0.804714 | 0.779772 |
| Lombard (lmo) | 0.556818 | 0.536986 | 0.546722 |
| Northern Luri (lrc) | 0.838574 | 0.753296 | 0.793651 |
| Latgalian (ltg) | 0.759531 | 0.755102 | 0.757310 |
| Luxembourgish (ltz) | 0.645062 | 0.614706 | 0.629518 |
| Luganda (lug) | 0.787535 | 0.805797 | 0.796562 |
| Literary Chinese (lzh) | 0.921951 | 0.949749 | 0.935644 |
| Maithili (mai) | 0.777778 | 0.761658 | 0.769634 |
| Malayalam (mal) | 0.993377 | 0.949367 | 0.970874 |
| Banyumasan (map-bms) | 0.531429 | 0.453659 | 0.489474 |
| Marathi (mar) | 0.748744 | 0.818681 | 0.782152 |
| Moksha (mdf) | 0.728745 | 0.800000 | 0.762712 |
| Eastern Mari (mhr) | 0.790323 | 0.760870 | 0.775316 |
| Minangkabau (min) | 0.953271 | 0.886957 | 0.918919 |
| Macedonian (mkd) | 0.816399 | 0.849722 | 0.832727 |
| Malagasy (mlg) | 0.925187 | 0.918317 | 0.921739 |
| Maltese (mlt) | 0.869421 | 0.890017 | 0.879599 |
| Min Nan Chinese (nan) | 0.743707 | 0.820707 | 0.780312 |
| Mongolian (mon) | 0.852194 | 0.838636 | 0.845361 |
| Maori (mri) | 0.934726 | 0.937173 | 0.935948 |
| Western Mari (mrj) | 0.818792 | 0.827119 | 0.822934 |
| Malay (msa) | 0.508065 | 0.376119 | 0.432247 |
| Mirandese (mwl) | 0.650407 | 0.685225 | 0.667362 |
| Burmese (mya) | 0.995968 | 0.972441 | 0.984064 |
| Erzya (myv) | 0.475783 | 0.503012 | 0.489019 |
| Mazanderani (mzn) | 0.775362 | 0.701639 | 0.736661 |
| Neapolitan (nap) | 0.628993 | 0.595349 | 0.611708 |
| Navajo (nav) | 0.955882 | 0.937500 | 0.946602 |
| Classical Nahuatl (nci) | 0.679758 | 0.589005 | 0.631136 |
| Low German (nds) | 0.669789 | 0.690821 | 0.680143 |
| West Low German (nds-nl) | 0.513889 | 0.504545 | 0.509174 |
| Nepali (macrolanguage) (nep) | 0.640476 | 0.649758 | 0.645084 |
| Newari (new) | 0.928571 | 0.745902 | 0.827273 |
| Dutch (nld) | 0.553763 | 0.553763 | 0.553763 |
| Norwegian Nynorsk (nno) | 0.569277 | 0.519231 | 0.543103 |
| Bokmål (nob) | 0.519856 | 0.562500 | 0.540338 |
| Narom (nrm) | 0.691275 | 0.605882 | 0.645768 |
| Northern Sotho (nso) | 0.950276 | 0.815166 | 0.877551 |
| Occitan (oci) | 0.483444 | 0.366834 | 0.417143 |
| Livvi-Karelian (olo) | 0.816850 | 0.790780 | 0.803604 |
| Oriya (ori) | 0.981481 | 0.963636 | 0.972477 |
| Oromo (orm) | 0.885714 | 0.829218 | 0.856536 |
| Ossetian (oss) | 0.822006 | 0.855219 | 0.838284 |
| Pangasinan (pag) | 0.842105 | 0.715655 | 0.773748 |
| Pampanga (pam) | 0.770000 | 0.435028 | 0.555957 |
| Panjabi (pan) | 0.996154 | 0.984791 | 0.990440 |
| Papiamento (pap) | 0.674672 | 0.661670 | 0.668108 |
| Picard (pcd) | 0.407895 | 0.356322 | 0.380368 |
| Pennsylvania German (pdc) | 0.487047 | 0.509485 | 0.498013 |
| Palatine German (pfl) | 0.614173 | 0.570732 | 0.591656 |
| Western Panjabi (pnb) | 0.926267 | 0.887417 | 0.906426 |
| Polish (pol) | 0.797059 | 0.734417 | 0.764457 |
| Portuguese (por) | 0.500914 | 0.586724 | 0.540434 |
| Pushto (pus) | 0.941489 | 0.898477 | 0.919481 |
| Quechua (que) | 0.854167 | 0.797665 | 0.824950 |
| Tarantino dialect (roa-tara) | 0.669794 | 0.724138 | 0.695906 |
| Romansh (roh) | 0.745527 | 0.760649 | 0.753012 |
| Romanian (ron) | 0.805486 | 0.769048 | 0.786845 |
| Rusyn (rue) | 0.718543 | 0.645833 | 0.680251 |
| Aromanian (rup) | 0.288482 | 0.730245 | 0.413580 |
| Russian (rus) | 0.530120 | 0.690583 | 0.599805 |
| Yakut (sah) | 0.853521 | 0.865714 | 0.859574 |
| Sanskrit (san) | 0.931343 | 0.896552 | 0.913616 |
| Sicilian (scn) | 0.734139 | 0.618321 | 0.671271 |
| Scots (sco) | 0.571429 | 0.540816 | 0.555701 |
| Samogitian (sgs) | 0.829167 | 0.748120 | 0.786561 |
| Sinhala (sin) | 0.909474 | 0.935065 | 0.922092 |
| Slovak (slk) | 0.738235 | 0.665782 | 0.700139 |
| Slovene (slv) | 0.671123 | 0.662269 | 0.666667 |
| Northern Sami (sme) | 0.800676 | 0.825784 | 0.813036 |
| Shona (sna) | 0.761702 | 0.724696 | 0.742739 |
| Sindhi (snd) | 0.950172 | 0.946918 | 0.948542 |
| Somali (som) | 0.849462 | 0.802030 | 0.825065 |
| Spanish (spa) | 0.325234 | 0.413302 | 0.364017 |
| Albanian (sqi) | 0.875899 | 0.832479 | 0.853637 |
| Sardinian (srd) | 0.750000 | 0.711061 | 0.730012 |
| Sranan (srn) | 0.888889 | 0.771084 | 0.825806 |
| Serbian (srp) | 0.824561 | 0.814356 | 0.819427 |
| Saterfriesisch (stq) | 0.790087 | 0.734417 | 0.761236 |
| Sundanese (sun) | 0.764192 | 0.631769 | 0.691700 |
| Swahili (macrolanguage) (swa) | 0.763496 | 0.796247 | 0.779528 |
| Swedish (swe) | 0.838284 | 0.723647 | 0.776758 |
| Silesian (szl) | 0.819788 | 0.750809 | 0.783784 |
| Tamil (tam) | 0.985765 | 0.955172 | 0.970228 |
| Tatar (tat) | 0.469780 | 0.795349 | 0.590674 |
| Tulu (tcy) | 0.893300 | 0.873786 | 0.883436 |
| Telugu (tel) | 1.000000 | 0.913690 | 0.954899 |
| Tetum (tet) | 0.765116 | 0.744344 | 0.754587 |
| Tajik (tgk) | 0.828418 | 0.813158 | 0.820717 |
| Tagalog (tgl) | 0.751468 | 0.757396 | 0.754420 |
| Thai (tha) | 0.933884 | 0.807143 | 0.865900 |
| Tongan (ton) | 0.920245 | 0.923077 | 0.921659 |
| Tswana (tsn) | 0.873397 | 0.889070 | 0.881164 |
| Turkmen (tuk) | 0.898438 | 0.837887 | 0.867107 |
| Turkish (tur) | 0.666667 | 0.716981 | 0.690909 |
| Tuvan (tyv) | 0.857143 | 0.805063 | 0.830287 |
| Udmurt (udm) | 0.865517 | 0.756024 | 0.807074 |
| Uighur (uig) | 0.991597 | 0.967213 | 0.979253 |
| Ukrainian (ukr) | 0.771341 | 0.702778 | 0.735465 |
| Urdu (urd) | 0.877647 | 0.855505 | 0.866434 |
| Uzbek (uzb) | 0.655652 | 0.797040 | 0.719466 |
| Venetian (vec) | 0.611111 | 0.527233 | 0.566082 |
| Veps (vep) | 0.672862 | 0.688213 | 0.680451 |
| Vietnamese (vie) | 0.932406 | 0.914230 | 0.923228 |
| Vlaams (vls) | 0.594427 | 0.501305 | 0.543909 |
| Volapük (vol) | 0.765625 | 0.942308 | 0.844828 |
| Võro (vro) | 0.797203 | 0.740260 | 0.767677 |
| Waray (war) | 0.930876 | 0.930876 | 0.930876 |
| Walloon (wln) | 0.636804 | 0.693931 | 0.664141 |
| Wolof (wol) | 0.864220 | 0.845601 | 0.854809 |
| Wu Chinese (wuu) | 0.848921 | 0.830986 | 0.839858 |
| Xhosa (xho) | 0.837398 | 0.759214 | 0.796392 |
| Mingrelian (xmf) | 0.943396 | 0.874126 | 0.907441 |
| Yiddish (yid) | 0.955729 | 0.897311 | 0.925599 |
| Yoruba (yor) | 0.812010 | 0.719907 | 0.763190 |
| Zeeuws (zea) | 0.617737 | 0.550409 | 0.582133 |
| Cantonese (zh-yue) | 0.859649 | 0.649007 | 0.739623 |
| Standard Chinese (zho) | 0.845528 | 0.781955 | 0.812500 |
| accuracy | 0.749527 | 0.749527 | 0.749527 |
| macro avg | 0.762866 | 0.742101 | 0.749261 |
| weighted avg | 0.762006 | 0.749527 | 0.752910 |
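As a quick reference for the two summary rows above: `macro avg` is the unweighted mean of the per-language scores, while `weighted avg` weights each language by its support. A minimal sketch with toy numbers (not taken from this table):

```python
def macro_avg(scores):
    # Unweighted mean over classes.
    return sum(scores) / len(scores)

def weighted_avg(scores, supports):
    # Support-weighted mean over classes.
    total = sum(supports)
    return sum(s * n for s, n in zip(scores, supports)) / total

f1 = [0.8, 0.6, 0.9]      # per-class F1 (toy values)
support = [10, 30, 60]    # number of eval samples per class
print(macro_avg(f1))      # plain mean of the three scores
print(weighted_avg(f1, support))
```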
## Questions?
Post a GitHub issue [here](https://github.com/m3hrdadfi/zabanshenas/issues). |
1,273 | m3tafl0ps/autonlp-NLPIsFun-251844 | [
"negative",
"positive"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- m3tafl0ps/autonlp-data-NLPIsFun
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 251844
## Validation Metrics
- Loss: 0.38616305589675903
- Accuracy: 0.8356545961002786
- Precision: 0.8253968253968254
- Recall: 0.8571428571428571
- AUC: 0.9222387781709815
- F1: 0.8409703504043127
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/m3tafl0ps/autonlp-NLPIsFun-251844
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("m3tafl0ps/autonlp-NLPIsFun-251844", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("m3tafl0ps/autonlp-NLPIsFun-251844", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,274 | madhurjindal/autonlp-Gibberish-Detector-492513457 | [
"clean",
"mild gibberish",
"noise",
"word salad"
] | ---
tags: [autonlp]
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- madhurjindal/autonlp-data-Gibberish-Detector
co2_eq_emissions: 5.527544460835904
---
# Problem Description
The ability to process and understand user input is crucial for various applications, such as chatbots or downstream tasks. However, a common challenge faced in such systems is the presence of gibberish or nonsensical input. To address this problem, we present a project focused on developing a gibberish detector for the English language.
The primary goal of this project is to classify user input as either **gibberish** or **non-gibberish**, enabling more accurate and meaningful interactions with the system. We also aim to enhance the overall performance and user experience of chatbots and other systems that rely on user input.
>## What is Gibberish?
Gibberish refers to **nonsensical or meaningless language or text** that lacks coherence or any discernible meaning. It can be characterized by a combination of random words, nonsensical phrases, grammatical errors, or syntactical abnormalities that prevent the communication from conveying a clear and understandable message. Gibberish can vary in intensity, ranging from simple noise with no meaningful words to sentences that may appear superficially correct but lack coherence or logical structure when examined closely. Detecting and identifying gibberish is essential in various contexts, such as **natural language processing**, **chatbot systems**, **spam filtering**, and **language-based security measures**, to ensure effective communication and accurate processing of user inputs.
## Label Description
Thus, we break down the problem into 4 categories:
1. **Noise:** Gibberish at the zero level, where even the individual words of the input phrase hold no meaning independently.
*For example: `dfdfer fgerfow2e0d qsqskdsd djksdnfkff swq.`*
2. **Word Salad:** Gibberish at level 1, where the words make sense independently, but taken together as a phrase they convey no coherent meaning.
*For example: `22 madhur old punjab pickle chennai`*
3. **Mild gibberish:** Gibberish at level 2, where part of the sentence has grammatical errors, word-sense errors, or other syntactical abnormalities that keep the sentence from conveying a coherent meaning.
*For example: `Madhur study in a teacher`*
4. **Clean:** This category represents a set of words that forms a complete and meaningful sentence on its own.
*For example: `I love this website`*
> **Tip:** To facilitate gibberish detection, you can combine the labels based on the desired level of detection. For instance, if you need to detect gibberish at level 1, you can group Noise and Word Salad together as "Gibberish," while grouping Mild gibberish and Clean together as "NotGibberish." This approach allows for flexibility in detecting and categorizing different levels of gibberish based on specific requirements.
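The grouping described in the tip above can be sketched as a simple post-processing step on the predicted label (label names as listed on this card; the helper itself is just illustrative):

```python
# Illustrative level-1 grouping: Noise + Word Salad -> "Gibberish",
# Mild gibberish + Clean -> "NotGibberish".
LEVEL1_GROUPS = {
    "noise": "Gibberish",
    "word salad": "Gibberish",
    "mild gibberish": "NotGibberish",
    "clean": "NotGibberish",
}

def to_level1(label: str) -> str:
    """Collapse a fine-grained label into the binary level-1 decision."""
    return LEVEL1_GROUPS[label.lower()]

print(to_level1("word salad"))  # grouped on the gibberish side
print(to_level1("clean"))       # grouped on the non-gibberish side
```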
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 492513457
- CO2 Emissions (in grams): 5.527544460835904
## Validation Metrics
- Loss: 0.07609463483095169
- Accuracy: 0.9735624586913417
- Macro F1: 0.9736173135739408
- Micro F1: 0.9735624586913417
- Weighted F1: 0.9736173135739408
- Macro Precision: 0.9737771415197378
- Micro Precision: 0.9735624586913417
- Weighted Precision: 0.9737771415197378
- Macro Recall: 0.9735624586913417
- Micro Recall: 0.9735624586913417
- Weighted Recall: 0.9735624586913417
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/madhurjindal/autonlp-Gibberish-Detector-492513457
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madhurjindal/autonlp-Gibberish-Detector-492513457", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madhurjindal/autonlp-Gibberish-Detector-492513457", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,275 | madlag/bert-large-uncased-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ## BERT-large finetuned on MNLI.
The [reference finetuned model](https://github.com/google-research/bert) has an accuracy of 86.05; we get 86.7:
```
{'eval_loss': 0.3984006643295288, 'eval_accuracy': 0.8667345899133979}
``` |
1,276 | marcelcastrobr/sagemaker-distilbert-emotion-2 | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9315
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1442
- Accuracy: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9316 | 1.0 | 500 | 0.2384 | 0.918 |
| 0.1849 | 2.0 | 1000 | 0.1599 | 0.9265 |
| 0.1047 | 3.0 | 1500 | 0.1442 | 0.9315 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
1,277 | marcelcastrobr/sagemaker-distilbert-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1477
- Accuracy: 0.928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9308 | 1.0 | 500 | 0.2632 | 0.916 |
| 0.1871 | 2.0 | 1000 | 0.1651 | 0.926 |
| 0.1025 | 3.0 | 1500 | 0.1477 | 0.928 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
1,278 | marcolatella/Hps_seed1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: Hps_seed1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7176561823314135
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hps_seed1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9681
- F1: 0.7177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6525359309081455e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6553 | 1.0 | 1426 | 0.6275 | 0.7095 |
| 0.4945 | 2.0 | 2852 | 0.6181 | 0.7251 |
| 0.366 | 3.0 | 4278 | 0.7115 | 0.7274 |
| 0.2374 | 4.0 | 5704 | 0.8368 | 0.7133 |
| 0.1658 | 5.0 | 7130 | 0.9681 | 0.7177 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,279 | marcolatella/emotion_trained | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7377785764567545
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9362
- F1: 0.7378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.7468 | 0.6599 |
| No log | 2.0 | 408 | 0.6829 | 0.7369 |
| 0.5184 | 3.0 | 612 | 0.8089 | 0.7411 |
| 0.5184 | 4.0 | 816 | 0.9362 | 0.7378 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,280 | marcolatella/emotion_trained_1234567 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7328362995029661
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9045
- F1: 0.7328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6480 | 0.7231 |
| No log | 2.0 | 408 | 0.6114 | 0.7403 |
| 0.5045 | 3.0 | 612 | 0.7593 | 0.7311 |
| 0.5045 | 4.0 | 816 | 0.9045 | 0.7328 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,281 | marcolatella/emotion_trained_31415 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7213200335291519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9166
- F1: 0.7213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6182 | 0.7137 |
| No log | 2.0 | 408 | 0.7472 | 0.6781 |
| 0.5084 | 3.0 | 612 | 0.8242 | 0.7236 |
| 0.5084 | 4.0 | 816 | 0.9166 | 0.7213 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,282 | marcolatella/emotion_trained_42 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7319321237976675
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8988
- F1: 0.7319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6131 | 0.6955 |
| No log | 2.0 | 408 | 0.5837 | 0.7270 |
| 0.5149 | 3.0 | 612 | 0.8925 | 0.7267 |
| 0.5149 | 4.0 | 816 | 0.8988 | 0.7319 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,288 | marcolatella/prova_Classi2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: prova_Classi2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.20192866271639365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prova_Classi2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0183
- F1: 0.2019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002739353542073378
- train_batch_size: 32
- eval_batch_size: 16
- seed: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0171 | 1.0 | 1426 | 1.0183 | 0.2019 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,289 | marcolatella/tweet_eval_bench | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: prova_Classi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.716
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prova_Classi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5530
- Accuracy: 0.716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00013441028267541125
- train_batch_size: 32
- eval_batch_size: 16
- seed: 17
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7022 | 1.0 | 1426 | 0.6581 | 0.7105 |
| 0.5199 | 2.0 | 2852 | 0.6835 | 0.706 |
| 0.2923 | 3.0 | 4278 | 0.7941 | 0.7075 |
| 0.1366 | 4.0 | 5704 | 1.0761 | 0.7115 |
| 0.0645 | 5.0 | 7130 | 1.5530 | 0.716 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,293 | marma/bert-base-swedish-cased-sentiment | [
"NEGATIVE",
"POSITIVE"
Experimental sentiment analysis model based on ~20k App Store reviews in Swedish.
### Usage
```python
from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='marma/bert-base-swedish-cased-sentiment')
>>> sa('Det här är ju fantastiskt!')
[{'label': 'POSITIVE', 'score': 0.9974609613418579}]
>>> sa('Den här appen suger!')
[{'label': 'NEGATIVE', 'score': 0.998340368270874}]
>>> sa('Det är fruktansvärt.')
[{'label': 'NEGATIVE', 'score': 0.998340368270874}]
>>> sa('Det är fruktansvärt bra.')
[{'label': 'POSITIVE', 'score': 0.998340368270874}]
``` |
1,294 | martin-ha/toxic-comment-model | [
"non-toxic",
"toxic"
] | ---
language: en
---
## Model description
This model is a fine-tuned version of the [DistilBERT model](https://huggingface.co/transformers/model_doc/distilbert.html) to classify toxic comments.
## How to use
You can use the model with the following code.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline
model_path = "martin-ha/toxic-comment-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline('This is a test text.'))
```
## Limitations and Bias
This model is intended to be used to classify toxic online comments. However, one limitation of the model is that it performs poorly for some comments that mention a specific identity subgroup, like Muslim. The following table shows evaluation scores for different identity groups. You can learn the specific meaning of these metrics [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation). But basically, these metrics show how well the model performs for a specific group. The larger the number, the better.
| **subgroup** | **subgroup_size** | **subgroup_auc** | **bpsn_auc** | **bnsp_auc** |
| ----------------------------- | ----------------- | ---------------- | ------------ | ------------ |
| muslim | 108 | 0.689 | 0.811 | 0.88 |
| jewish | 40 | 0.749 | 0.86 | 0.825 |
| homosexual_gay_or_lesbian | 56 | 0.795 | 0.706 | 0.972 |
| black | 84 | 0.866 | 0.758 | 0.975 |
| white | 112 | 0.876 | 0.784 | 0.97 |
| female | 306 | 0.898 | 0.887 | 0.948 |
| christian | 231 | 0.904 | 0.917 | 0.93 |
| male | 225 | 0.922 | 0.862 | 0.967 |
| psychiatric_or_mental_illness | 26 | 0.924 | 0.907 | 0.95 |
The table above shows that the model performs poorly for the Muslim and Jewish groups. In fact, if you pass the sentence "Muslims are people who follow or practice Islam, an Abrahamic monotheistic religion." into the model, the model will classify it as toxic. Be mindful of this type of potential bias.
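For intuition, the `subgroup_auc` column above restricts the usual ROC-AUC to the comments that mention the identity subgroup. A rough pure-Python sketch of that idea, with toy data (this is assumed from the Kaggle metric description linked above, not code from this repo; the bpsn/bnsp variants are defined on that page):

```python
def roc_auc(labels, scores):
    """Pairwise (Mann-Whitney) formulation of ROC-AUC; ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def subgroup_auc(labels, scores, mentions_subgroup):
    """AUC computed only on comments that mention the identity subgroup."""
    rows = [(y, s) for y, s, m in zip(labels, scores, mentions_subgroup) if m]
    return roc_auc([y for y, _ in rows], [s for _, s in rows])

# Toy data: 1 = toxic, scores = model toxicity probabilities.
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]
group = [True, True, True, True, False, False]
print(subgroup_auc(labels, scores, group))
```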
## Training data
The training data comes from this [Kaggle competition](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). We use 10% of the `train.csv` data to train the model.
## Training procedure
You can see [this documentation and code](https://github.com/MSIA/wenyang_pan_nlp_project_2021) for how we train the model. It takes about 3 hours on a P-100 GPU.
## Evaluation results
The model achieves 94% accuracy and 0.59 F1-score on a 10000-row held-out test set. |
1,295 | masapasa/sagemaker-distilbert-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.915
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2590
- Accuracy: 0.915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9292 | 1.0 | 500 | 0.2590 | 0.915 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
1,296 | mateocolina/xlm-roberta-base-finetuned-marc-en | [
"good",
"great",
"ok",
"poor",
"terrible"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9276
- Mae: 0.5366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0992 | 1.0 | 235 | 0.9340 | 0.5122 |
| 0.945 | 2.0 | 470 | 0.9276 | 0.5366 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,300 | mattmcclean/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9252235175634111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2173
- Accuracy: 0.925
- F1: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.825 | 1.0 | 250 | 0.2925 | 0.915 | 0.9134 |
| 0.2444 | 2.0 | 500 | 0.2173 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,301 | maximedb/autonlp-vaccinchat-22134694 | [
"chitchat_ask_bye",
"chitchat_ask_hi",
"chitchat_ask_hi_de",
"chitchat_ask_hi_en",
"chitchat_ask_hi_fr",
"chitchat_ask_hoe_gaat_het",
"chitchat_ask_name",
"chitchat_ask_thanks",
"faq_ask_aantal_gevaccineerd",
"faq_ask_aantal_gevaccineerd_wereldwijd",
"faq_ask_afspraak_afzeggen",
"faq_ask_afspr... | ---
tags: autonlp
language: nl
widget:
- text: "I love AutoNLP 🤗"
datasets:
- maximedb/autonlp-data-vaccinchat
co2_eq_emissions: 14.525955245648218
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 22134694
- CO2 Emissions (in grams): 14.525955245648218
## Validation Metrics
- Loss: 1.7039562463760376
- Accuracy: 0.6369376479873717
- Macro F1: 0.5363181342408181
- Micro F1: 0.6369376479873717
- Weighted F1: 0.6309793486221543
- Macro Precision: 0.5533353910494714
- Micro Precision: 0.6369376479873717
- Weighted Precision: 0.676981050732216
- Macro Recall: 0.5828723356986293
- Micro Recall: 0.6369376479873717
- Weighted Recall: 0.6369376479873717
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/maximedb/autonlp-vaccinchat-22134694
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("maximedb/autonlp-vaccinchat-22134694", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("maximedb/autonlp-vaccinchat-22134694", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,305 | mazancourt/politics-sentence-classifier | [
"other",
"problem",
"solution"
] | ---
tags: [autonlp, Text Classification, Politics]
language: fr
widget:
- text: "Il y a dans ce pays une fracture"
datasets:
- mazancourt/autonlp-data-politics-sentence-classifier
co2_eq_emissions: 1.06099358268878
---
# Prediction of sentence "nature" in a French political sentence
This model aims at predicting the nature of a sentence in French political discourse.
The predictions fall in three categories:
- `problem`: the sentence describes a problem (usually one the speaker intends to tackle), for example: _il y a dans ce pays une fracture_ ("there is a fracture in this country", J. Chirac)
- `solution`: the sentence describes a solution (typically part of a political programme), for example: _J’ai supprimé les droits de succession parce que je crois au travail et parce que je crois à la famille._ ("I abolished inheritance taxes because I believe in work and because I believe in the family.", N. Sarkozy)
- `other`: the sentence belongs to neither category, for example: _vive la République, vive la France_ ("long live the Republic, long live France")
This model was trained using AutoNLP based on sentences extracted from a mix of political tweets and speeches.
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 23105051
- CO2 Emissions (in grams): 1.06099358268878
## Validation Metrics
- Loss: 0.6050735712051392
- Accuracy: 0.8097826086956522
- Macro F1: 0.7713543865034599
- Micro F1: 0.8097826086956522
- Weighted F1: 0.8065488494385247
- Macro Precision: 0.7861074705111403
- Micro Precision: 0.8097826086956522
- Weighted Precision: 0.806470454156932
- Macro Recall: 0.7599656456873758
- Micro Recall: 0.8097826086956522
- Weighted Recall: 0.8097826086956522
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Il y a dans ce pays une fracture"}' https://api-inference.huggingface.co/models/mazancourt/politics-sentence-classifier
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mazancourt/autonlp-politics-sentence-classifier-23105051", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mazancourt/politics-sentence-classifier", use_auth_token=True)
inputs = tokenizer("Il y a dans ce pays une fracture", return_tensors="pt")
outputs = model(**inputs)
# Category can be "problem", "solution" or "other".
# The raw model output exposes logits, so map them to a label explicitly:
probs = outputs.logits.softmax(dim=-1)
predicted_id = int(probs.argmax(dim=-1))
category = model.config.id2label[predicted_id]
score = float(probs[0, predicted_id])
``` |
1,306 | mdhugol/indonesia-bert-sentiment-classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Indonesian BERT Base Sentiment Classifier is a sentiment text-classification model. It was fine-tuned from the pre-trained [IndoBERT Base Model (phase1 - uncased)](https://huggingface.co/indobenchmark/indobert-base-p1) model on the [Prosa sentiment dataset](https://github.com/indobenchmark/indonlu/tree/master/dataset/smsa_doc-sentiment-prosa).
## How to Use
### As Text Classifier
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
pretrained = "mdhugol/indonesia-bert-sentiment-classification"
model = AutoModelForSequenceClassification.from_pretrained(pretrained)
tokenizer = AutoTokenizer.from_pretrained(pretrained)
sentiment_analysis = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
label_index = {'LABEL_0': 'positive', 'LABEL_1': 'neutral', 'LABEL_2': 'negative'}
pos_text = "Sangat bahagia hari ini"  # "Very happy today"
neg_text = "Dasar anak sialan!! Kurang ajar!!"  # "Damn brat!! So insolent!!"
result = sentiment_analysis(pos_text)
status = label_index[result[0]['label']]
score = result[0]['score']
print(f'Text: {pos_text} | Label : {status} ({score * 100:.3f}%)')
result = sentiment_analysis(neg_text)
status = label_index[result[0]['label']]
score = result[0]['score']
print(f'Text: {neg_text} | Label : {status} ({score * 100:.3f}%)')
``` |
1,307 | mdraw/german-news-sentiment-bert | [
"negative",
"neutral",
"positive"
] | # German sentiment BERT finetuned on news data
Sentiment analysis model based on https://huggingface.co/oliverguhr/german-sentiment-bert, with additional training on German news texts about migration.
This model is part of the project https://github.com/text-analytics-20/news-sentiment-development, which explores sentiment development in German news articles about migration between 2007 and 2019.
Code for inference (predicting sentiment polarity) on raw text can be found at https://github.com/text-analytics-20/news-sentiment-development/blob/main/sentiment_analysis/bert.py
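Since the linked project tracks how average sentiment develops between 2007 and 2019, the downstream aggregation step is easy to sketch. Note that the `(year, label)` pairs and the +1/0/-1 polarity mapping below are illustrative assumptions, not the project's actual data or scoring:

```python
from collections import defaultdict

# Hypothetical (year, predicted label) pairs -- illustrative only.
predictions = [
    (2007, "negative"), (2007, "neutral"),
    (2019, "positive"), (2019, "negative"), (2019, "positive"),
]
# Assumed discrete polarity mapping; the project itself works with
# real-valued polarity scores.
polarity = {"positive": 1, "neutral": 0, "negative": -1}

by_year = defaultdict(list)
for year, label in predictions:
    by_year[year].append(polarity[label])

# Mean polarity per year, in chronological order.
mean_by_year = {year: sum(v) / len(v) for year, v in sorted(by_year.items())}
print(mean_by_year)
```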
If you are not interested in polarity but just want to predict discrete class labels (0: positive, 1: negative, 2: neutral), you can also use the model with Oliver Guhr's `germansentiment` package as follows:
First install the package from PyPI:
```bash
pip install germansentiment
```
Then you can use the model in Python:
```python
from germansentiment import SentimentModel
model = SentimentModel('mdraw/german-news-sentiment-bert')
# Examples from our validation dataset
texts = [
    # "[...], raves the non-partisan deputy mayor and historian Christian Matzka about the 'great helper scene'."
    '[...], schwärmt der parteilose Vizebürgermeister und Historiker Christian Matzka von der "tollen Helferszene".',
    # "Refugee shelter, 11:05 a.m.: mass brawl"
    'Flüchtlingsheim 11.05 Uhr: Massenschlägerei',
    # "Rotterdam reportedly has a migrant share of more than 50 percent."
    'Rotterdam habe einen Migrantenanteil von mehr als 50 Prozent.',
]
result = model.predict_sentiment(texts)
print(result)
```
The code above will print:
```python
['positive', 'negative', 'neutral']
```
|
1,308 | medA/autonlp-FR_another_test-565016091 | [
"BODY_SHAMING",
"HATE",
"HOMOPHOBIA",
"INSULT",
"MISOGYNY",
"MORAL_HARASSMENT",
"NEUTRAL",
"RACISM",
"SEXUAL_HARASSMENT",
"SUPPORTIVE",
"THREAT",
"TROLL"
] | ---
tags: autonlp
language: fr
widget:
- text: "I love AutoNLP 🤗"
datasets:
- medA/autonlp-data-FR_another_test
co2_eq_emissions: 70.54639641012226
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 565016091
- CO2 Emissions (in grams): 70.54639641012226
## Validation Metrics
- Loss: 0.5170354247093201
- Accuracy: 0.8545909432074056
- Macro F1: 0.7910662503820883
- Micro F1: 0.8545909432074056
- Weighted F1: 0.8539837213761081
- Macro Precision: 0.8033640381948799
- Micro Precision: 0.8545909432074056
- Weighted Precision: 0.856160322286008
- Macro Recall: 0.7841845637031052
- Micro Recall: 0.8545909432074056
- Weighted Recall: 0.8545909432074056
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/medA/autonlp-FR_another_test-565016091
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("medA/autonlp-FR_another_test-565016091", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("medA/autonlp-FR_another_test-565016091", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,313 | mgrella/autonlp-bank-transaction-classification-5521155 | [
"Category.BILLS_SUBSCRIPTIONS_BILLS",
"Category.BILLS_SUBSCRIPTIONS_INTERNET_PHONE",
"Category.BILLS_SUBSCRIPTIONS_OTHER",
"Category.BILLS_SUBSCRIPTIONS_SUBSCRIPTIONS",
"Category.CREDIT_CARDS_CREDIT_CARDS",
"Category.EATING_OUT_COFFEE_SHOPS",
"Category.EATING_OUT_OTHER",
"Category.EATING_OUT_RESTAURAN... | ---
tags: autonlp
language: it
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mgrella/autonlp-data-bank-transaction-classification
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 5521155
## Validation Metrics
- Loss: 1.3173143863677979
- Accuracy: 0.8220706757594545
- Macro F1: 0.5713688384455807
- Micro F1: 0.8220706757594544
- Weighted F1: 0.8217158913702755
- Macro Precision: 0.6064387992817253
- Micro Precision: 0.8220706757594545
- Weighted Precision: 0.8491515834140735
- Macro Recall: 0.5873349311175117
- Micro Recall: 0.8220706757594545
- Weighted Recall: 0.8220706757594545
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mgrella/autonlp-bank-transaction-classification-5521155
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mgrella/autonlp-bank-transaction-classification-5521155", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mgrella/autonlp-bank-transaction-classification-5521155", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,321 | microsoft/deberta-base-mnli | [
"CONTRADICTION",
"NEUTRAL",
"ENTAILMENT"
] | ---
language: en
tags:
- deberta-v1
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This model is the base DeBERTa model fine-tuned on the MNLI task
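As a side note on the "disentangled attention" mentioned above: each attention score is decomposed into a content-to-content term plus two cross terms that mix content with *relative-position* embeddings. The sketch below is purely didactic, with tiny made-up vectors; the real implementation adds per-head projections, scaling, bucketed relative positions, and the enhanced mask decoder:

```python
# Toy illustration of DeBERTa's disentangled attention score for a
# 2-token sequence. Each score mixes three terms:
#   content-to-content, content-to-position, and position-to-content.
# Didactic sketch of the paper's formula, not the library implementation.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Content vectors H_i and relative-position vectors P (already projected);
# all numbers here are made up for illustration.
H = [[1.0, 0.0], [0.5, 0.5]]                         # one row per token
P = {-1: [0.1, 0.2], 0: [0.0, 0.0], 1: [0.2, 0.1]}  # keyed by relative distance

def disentangled_score(i, j):
    c2c = dot(H[i], H[j])        # content-to-content
    c2p = dot(H[i], P[j - i])    # content attends to relative position
    p2c = dot(P[i - j], H[j])    # relative position attends to content
    return c2c + c2p + p2c

scores = [[disentangled_score(i, j) for j in range(2)] for i in range(2)]
print(scores)
```

The point of the decomposition is that word content and word position contribute separately to each attention score, instead of being summed into a single input embedding as in BERT.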
#### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and MNLI tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m |
|-------------------|-----------|-----------|--------|
| RoBERTa-base | 91.5/84.6 | 83.7/80.5 | 87.6 |
| XLNet-Large | -/- | -/80.2 | 86.8 |
| **DeBERTa-base** | 93.1/87.2 | 86.2/83.1 | 88.8 |
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
1,322 | microsoft/deberta-large-mnli | [
"CONTRADICTION",
"NEUTRAL",
"ENTAILMENT"
] | ---
language: en
tags:
- deberta-v1
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa large model fine-tuned on the MNLI task.
#### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from the pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
1,323 | microsoft/deberta-v2-xlarge-mnli | [
"CONTRADICTION",
"NEUTRAL",
"ENTAILMENT"
] | ---
language: en
tags:
- deberta
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 XLarge model fine-tuned on the MNLI task; it has 24 layers and a hidden size of 1536, for 900M total parameters.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from the pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
1,324 | microsoft/deberta-v2-xxlarge-mnli | [
"CONTRADICTION",
"NEUTRAL",
"ENTAILMENT"
] | ---
language: en
tags:
- deberta
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 XXLarge model fine-tuned on the MNLI task; it has 48 layers and a hidden size of 1536, for 1.5B total parameters.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from the pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **DeepSpeed**, as it is faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=rte
output_dir="ds_results"
num_gpus=8
batch_size=4
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \\
run_glue.py \\
--model_name_or_path microsoft/deberta-v2-xxlarge-mnli \\
--task_name $TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 256 \\
--per_device_train_batch_size ${batch_size} \\
--learning_rate 3e-6 \\
--num_train_epochs 3 \\
--output_dir $output_dir \\
--overwrite_output_dir \\
--logging_steps 10 \\
--logging_dir $output_dir \\
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=rte
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge-mnli \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 4 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
1,325 | microsoft/deberta-xlarge-mnli | [
"CONTRADICTION",
"NEUTRAL",
"ENTAILMENT"
] | ---
language: en
tags:
- deberta-v1
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa XLarge model (750M parameters) fine-tuned on the MNLI task.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from the pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
1,326 | microsoft/tapex-base-finetuned-tabfact | [
"Entailed",
"Refused"
] | ---
language: en
tags:
- tapex
datasets:
- tab_fact
license: mit
---
# TAPEX (base-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-base` model fine-tuned on the [Tabfact](https://huggingface.co/datasets/tab_fact) dataset.
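To make the *table reasoning* input concrete: before the seq2seq model sees a table, the tokenizer flattens it into a single sequence of header and numbered rows. The sketch below only approximates the flavor of that linearization; the exact format, casing, and truncation behavior belong to `TapexTokenizer`:

```python
def linearize_table(data):
    """Flatten a column-oriented table into one string, roughly in the
    'header then numbered rows' style TAPEX feeds to its encoder.
    Illustrative only -- not the exact TapexTokenizer output."""
    header = "col : " + " | ".join(data.keys())
    n_rows = len(next(iter(data.values())))
    rows = [
        "row {} : ".format(i + 1)
        + " | ".join(str(data[col][i]) for col in data)
        for i in range(n_rows)
    ]
    return " ".join([header] + rows)

data = {"year": [1896, 2012], "city": ["athens", "london"]}
print(linearize_table(data))
```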
## Intended Uses
You can use the model for table fact verification.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForSequenceClassification
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base-finetuned-tabfact")
model = BartForSequenceClassification.from_pretrained("microsoft/tapex-base-finetuned-tabfact")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "beijing hosts the olympic games in 2012"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model(**encoding)
output_id = int(outputs.logits[0].argmax(dim=0))
print(model.config.id2label[output_id])
# Refused
```
### How to Eval
Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` |
1,327 | microsoft/tapex-large-finetuned-tabfact | [
"LABEL_0",
"LABEL_1"
] | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- tab_fact
license: mit
---
# TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-large` model fine-tuned on the [Tabfact](https://huggingface.co/datasets/tab_fact) dataset.
## Intended Uses
You can use the model for table fact verification.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForSequenceClassification
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = BartForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "beijing hosts the olympic games in 2012"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model(**encoding)
output_id = int(outputs.logits[0].argmax(dim=0))
print(model.config.id2label[output_id])
# Refused
```
### How to Eval
Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` |
1,331 | milyiyo/distilbert-base-uncased-finetuned-amazon-review | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-uncased-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.693
- name: F1
type: f1
value: 0.7002653469272611
- name: Precision
type: precision
value: 0.709541681233075
- name: Recall
type: recall
value: 0.693
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon-review
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3494
- Accuracy: 0.693
- F1: 0.7003
- Precision: 0.7095
- Recall: 0.693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.5 | 500 | 0.8287 | 0.7104 | 0.7120 | 0.7152 | 0.7104 |
| 0.4238 | 1.0 | 1000 | 0.8917 | 0.7094 | 0.6989 | 0.6917 | 0.7094 |
| 0.4238 | 1.5 | 1500 | 0.9367 | 0.6884 | 0.6983 | 0.7151 | 0.6884 |
| 0.3152 | 2.0 | 2000 | 0.9845 | 0.7116 | 0.7144 | 0.7176 | 0.7116 |
| 0.3152 | 2.5 | 2500 | 1.0752 | 0.6814 | 0.6968 | 0.7232 | 0.6814 |
| 0.2454 | 3.0 | 3000 | 1.1215 | 0.6918 | 0.6954 | 0.7068 | 0.6918 |
| 0.2454 | 3.5 | 3500 | 1.2905 | 0.6976 | 0.7048 | 0.7138 | 0.6976 |
| 0.1989 | 4.0 | 4000 | 1.2938 | 0.694 | 0.7016 | 0.7113 | 0.694 |
| 0.1989 | 4.5 | 4500 | 1.3623 | 0.6972 | 0.7014 | 0.7062 | 0.6972 |
| 0.1746 | 5.0 | 5000 | 1.3494 | 0.693 | 0.7003 | 0.7095 | 0.693 |
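In the table above the Recall column is always identical to Accuracy. That is what happens when recall is computed as a support-weighted average over classes (as appears to be the case here), since per-class recall weighted by support reduces algebraically to overall accuracy. A minimal illustration with toy labels (not the actual evaluation data):

```python
from collections import Counter

y_true = [0, 0, 1, 1, 1, 2]  # toy labels, not the evaluation data
y_pred = [0, 1, 1, 1, 2, 2]

support = Counter(y_true)
recall_per_class = {
    c: sum(t == p == c for t, p in zip(y_true, y_pred)) / n
    for c, n in support.items()
}
# Support-weighted recall: sum_c (TP_c / n_c) * n_c / N = sum_c TP_c / N = accuracy.
weighted_recall = sum(recall_per_class[c] * n for c, n in support.items()) / len(y_true)
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(weighted_recall, accuracy)  # identical by construction
```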
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1,332 | milyiyo/electra-base-gen-finetuned-amazon-review | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-base-gen-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5024
- name: F1
type: f1
value: 0.5063190059782597
- name: Precision
type: precision
value: 0.5121183330982292
- name: Recall
type: recall
value: 0.5024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-gen-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-base-generator](https://huggingface.co/mrm8488/electricidad-base-generator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8030
- Accuracy: 0.5024
- F1: 0.5063
- Precision: 0.5121
- Recall: 0.5024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|:---------:|:------:|
| 0.5135 | 1.0 | 1000 | 0.4886 | 0.4929 | 1.6580 | 0.5077 | 0.4886 |
| 0.4138 | 2.0 | 2000 | 0.5044 | 0.5093 | 1.7951 | 0.5183 | 0.5044 |
| 0.4244 | 3.0 | 3000 | 0.5022 | 0.5068 | 1.8108 | 0.5141 | 0.5022 |
| 0.4231        | 6.0   | 6000 | 0.4972   | 0.5018 | 1.7636          | 0.5092    | 0.4972 |
| 0.3574        | 7.0   | 7000 | 0.5024   | 0.5063 | 1.8030          | 0.5121    | 0.5024 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1,333 | milyiyo/electra-small-finetuned-amazon-review | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-small-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.5504
- name: F1
type: f1
value: 0.5457527808330634
- name: Precision
type: precision
value: 0.5428695841337288
- name: Recall
type: recall
value: 0.5504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-finetuned-amazon-review
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0560
- Accuracy: 0.5504
- F1: 0.5458
- Precision: 0.5429
- Recall: 0.5504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.2172 | 1.0 | 1000 | 1.1014 | 0.5216 | 0.4902 | 0.4954 | 0.5216 |
| 1.0027 | 2.0 | 2000 | 1.0388 | 0.549 | 0.5471 | 0.5494 | 0.549 |
| 0.9035 | 3.0 | 3000 | 1.0560 | 0.5504 | 0.5458 | 0.5429 | 0.5504 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1,334 | milyiyo/minilm-finetuned-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: minilm-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.931192
---
Base model: [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased)
Dataset: [emotion](https://huggingface.co/datasets/emotion)
These are the results on the evaluation set:
| Attribute | Value |
| ------------------ | -------- |
| Training Loss | 0.163100 |
| Validation Loss | 0.192153 |
| F1 | 0.931192 |
|
1,335 | milyiyo/multi-minilm-finetuned-amazon-review | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: multi-minilm-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5422
- name: F1
type: f1
value: 0.543454465221178
- name: Precision
type: precision
value: 0.5452336215624385
- name: Recall
type: recall
value: 0.5422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-minilm-finetuned-amazon-review
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2436
- Accuracy: 0.5422
- F1: 0.5435
- Precision: 0.5452
- Recall: 0.5422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0049 | 1.0 | 2500 | 1.0616 | 0.5352 | 0.5268 | 0.5347 | 0.5352 |
| 0.9172 | 2.0 | 5000 | 1.0763 | 0.5432 | 0.5412 | 0.5444 | 0.5432 |
| 0.8285 | 3.0 | 7500 | 1.1077 | 0.5408 | 0.5428 | 0.5494 | 0.5408 |
| 0.7361 | 4.0 | 10000 | 1.1743 | 0.5342 | 0.5399 | 0.5531 | 0.5342 |
| 0.6538 | 5.0 | 12500 | 1.2436 | 0.5422 | 0.5435 | 0.5452 | 0.5422 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1,336 | milyiyo/selectra-small-finetuned-amazon-review | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: selectra-small-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.737
- name: F1
type: f1
value: 0.7437773019932409
- name: Precision
type: precision
value: 0.7524857881639091
- name: Recall
type: recall
value: 0.737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# selectra-small-finetuned-amazon-review
This model is a fine-tuned version of [Recognai/selectra_small](https://huggingface.co/Recognai/selectra_small) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6279
- Accuracy: 0.737
- F1: 0.7438
- Precision: 0.7525
- Recall: 0.737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.5 | 500 | 0.7041 | 0.7178 | 0.6724 | 0.6715 | 0.7178 |
| 0.7908 | 1.0 | 1000 | 0.6365 | 0.7356 | 0.7272 | 0.7211 | 0.7356 |
| 0.7908 | 1.5 | 1500 | 0.6204 | 0.7376 | 0.7380 | 0.7387 | 0.7376 |
| 0.6358 | 2.0 | 2000 | 0.6162 | 0.7386 | 0.7377 | 0.7380 | 0.7386 |
| 0.6358 | 2.5 | 2500 | 0.6228 | 0.7274 | 0.7390 | 0.7576 | 0.7274 |
| 0.5827 | 3.0 | 3000 | 0.6188 | 0.7378 | 0.7400 | 0.7425 | 0.7378 |
| 0.5827 | 3.5 | 3500 | 0.6246 | 0.7374 | 0.7416 | 0.7467 | 0.7374 |
| 0.5427 | 4.0 | 4000 | 0.6266 | 0.7446 | 0.7452 | 0.7465 | 0.7446 |
| 0.5427 | 4.5 | 4500 | 0.6331 | 0.7392 | 0.7421 | 0.7456 | 0.7392 |
| 0.5184 | 5.0 | 5000 | 0.6279 | 0.737 | 0.7438 | 0.7525 | 0.737 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1,338 | ml6team/distilbert-base-dutch-cased-toxic-comments | [
"non-toxic",
"toxic"
] | ---
language:
- nl
tags:
- text-classification
- pytorch
widget:
- text: "Ik heb je lief met heel mijn hart"
example_title: "Non toxic comment 1"
- text: "Dat is een goed punt, zo had ik het nog niet bekeken."
example_title: "Non toxic comment 2"
- text: "Wat de fuck zei je net tegen me, klootzak?"
example_title: "Toxic comment 1"
- text: "Rot op, vuile hoerenzoon."
example_title: "Toxic comment 2"
license: apache-2.0
metrics:
- accuracy
- f1
- recall
- precision
---
# distilbert-base-dutch-toxic-comments
## Model description:
This model was created to detect toxic or potentially harmful comments.
For this model, we fine-tuned a multilingual DistilBERT model, [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), on the translated [Jigsaw Toxicity dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
The original dataset was translated using the appropriate [MarianMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
The model was trained for 2 epochs, on 90% of the dataset, with the following arguments:
```
training_args = TrainingArguments(
learning_rate=3e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
gradient_accumulation_steps=4,
load_best_model_at_end=True,
metric_for_best_model="recall",
num_train_epochs=2,
evaluation_strategy="steps",
save_strategy="steps",
save_total_limit=10,
logging_steps=100,
eval_steps=250,
save_steps=250,
weight_decay=0.001,
report_to="wandb")
```
## Model Performance:
Model evaluation was done on 1/10th of the dataset, which served as the test dataset.
| Accuracy | F1 Score | Recall | Precision |
| --- | --- | --- | --- |
| 95.75 | 78.88 | 77.23 | 80.61 |
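As a quick sanity check on the table above, the reported F1 score is (up to rounding) the harmonic mean of the reported precision and recall:

```python
precision, recall = 80.61, 77.23  # values reported in the table above

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # → 78.88, matching the reported F1 score
```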
## Dataset:
Unfortunately we cannot open-source the dataset, since we are bound by the underlying Jigsaw license.
|
1,339 | ml6team/distilbert-base-german-cased-toxic-comments | [
"non_toxic",
"toxic"
] | ---
language:
- de
tags:
- distilbert
- german
- classification
datasets:
- germeval21
widget:
- text: "Das ist ein guter Punkt, so hatte ich das noch nicht betrachtet."
example_title: "Agreement (non-toxic)"
- text: "Wow, was ein geiles Spiel. Glückwunsch."
example_title: "Football (non-toxic)"
- text: "Halt deine scheiß Fresse, du Arschloch"
example_title: "Silence (toxic)"
- text: "Verpiss dich, du dreckiger Hurensohn."
example_title: "Dismiss (toxic)"
---
# German Toxic Comment Classification
## Model Description
This model was created to detect toxic or potentially harmful comments.
For this model, we fine-tuned a German DistilBERT model [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on a combination of five German datasets containing toxicity, profanity, offensive language, or hate speech.
## Intended Uses & Limitations
This model can be used to detect toxicity in German comments.
However, the definition of toxicity is vague and the model might not be able to detect all instances of toxicity.
It will not be able to detect toxicity in languages other than German.
## How to Use
```python
from transformers import pipeline
model_hub_url = 'https://huggingface.co/ml6team/distilbert-base-german-cased-toxic-comments'
model_name = 'ml6team/distilbert-base-german-cased-toxic-comments'
toxicity_pipeline = pipeline('text-classification', model=model_name, tokenizer=model_name)
comment = "Ein harmloses Beispiel"
result = toxicity_pipeline(comment)[0]
print(f"Comment: {comment}\nLabel: {result['label']}, score: {result['score']}")
```
## Limitations and Bias
The model was trained on a combination of datasets containing examples gathered from different social networks and internet communities. These represent only a narrow subset of possible instances of toxicity, and instances in other domains might not be detected reliably.
## Training Data
The training dataset combines the following five datasets:
* GermEval18 [[dataset](https://github.com/uds-lsv/GermEval-2018-Data)]
* Labels: abuse, profanity, toxicity
* GermEval21 [[dataset](https://github.com/germeval2021toxic/SharedTask/tree/main/Data%20Sets)]
* Labels: toxicity
* IWG Hatespeech dataset [[paper](https://arxiv.org/pdf/1701.08118.pdf), [dataset](https://github.com/UCSM-DUE/IWG_hatespeech_public)]
* Labels: hate speech
* Detecting Offensive Statements Towards Foreigners in Social Media (2017) by Breitschneider and Peters [[dataset](http://ub-web.de/research/)]
* Labels: hate
* HASOC: 2019 Hate Speech and Offensive Content [[dataset](https://hasocfire.github.io/hasoc/2019/index.html)]
* Labels: offensive, profanity, hate
The datasets contain different labels, ranging from profanity over hate speech to toxicity. In the combined dataset these labels were subsumed under `toxic` and `non-toxic`, yielding 23,515 examples in total.
Note that the datasets vary substantially in the number of examples.
## Training Procedure
The training and test sets were created using the predefined train/test splits where available, and otherwise 80% of the examples for training and 20% for testing. This resulted in 17,072 training examples and 6,443 test examples.
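As a consistency check, the two splits add back up to the combined dataset size. Note that because some of the source datasets ship predefined splits, the overall held-out fraction works out to roughly 27% rather than a flat 20%:

```python
train_examples, test_examples = 17_072, 6_443

total = train_examples + test_examples
print(total)                            # 23515, the combined dataset size
print(round(test_examples / total, 3))  # overall held-out fraction, ~0.274
```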
The model was trained for 2 epochs with the following arguments:
```python
training_args = TrainingArguments(
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=2,
evaluation_strategy="steps",
logging_strategy="steps",
logging_steps=100,
save_total_limit=5,
learning_rate=2e-5,
weight_decay=0.01,
metric_for_best_model='accuracy',
load_best_model_at_end=True
)
```
## Evaluation Results
Model evaluation was done on the held-out test set of 6,443 examples.
| Accuracy | F1 Score | Recall | Precision |
| -------- | -------- | -------- | ----------- |
| 78.50 | 50.34 | 39.22 | 70.27 |
|
1,340 | ml6team/robbert-dutch-base-toxic-comments | [
"non-toxic",
"toxic"
] | ---
language:
- nl
tags:
- text-classification
- pytorch
widget:
- text: "Ik heb je lief met heel mijn hart"
example_title: "Non toxic comment 1"
- text: "Dat is een goed punt, zo had ik het nog niet bekeken."
example_title: "Non toxic comment 2"
- text: "Wat de fuck zei je net tegen me, klootzak?"
example_title: "Toxic comment 1"
- text: "Rot op, vuile hoerenzoon."
example_title: "Toxic comment 2"
license: apache-2.0
metrics:
- accuracy
- f1
- recall
- precision
---
# RobBERT-dutch-base-toxic-comments
## Model description:
This model was created to detect toxic or potentially harmful comments.
For this model, we fine-tuned a Dutch RoBERTa-based model called [RobBERT](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the translated [Jigsaw Toxicity dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
The original dataset was translated using the appropriate [MarianMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
The model was trained for 2 epochs, on 90% of the dataset, with the following arguments:
```
training_args = TrainingArguments(
learning_rate=1e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
gradient_accumulation_steps=6,
load_best_model_at_end=True,
metric_for_best_model="recall",
num_train_epochs=2,
evaluation_strategy="steps",
save_strategy="steps",
save_total_limit=10,
logging_steps=100,
eval_steps=250,
save_steps=250,
weight_decay=0.001,
report_to="wandb")
```
## Model Performance:
Model evaluation was done on 1/10th of the dataset, which served as the test dataset.
| Accuracy | F1 Score | Recall | Precision |
| --- | --- | --- | --- |
| 95.63 | 78.80 | 78.99 | 78.61 |
## Dataset:
Unfortunately we cannot open-source the dataset, since we are bound by the underlying Jigsaw license.
|
1,341 | mlkorra/OGBV-gender-bert-hi-en | [
"NGEN",
"GEN"
] | ## BERT Model for OGBV gendered text classification
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("mlkorra/OGBV-gender-bert-hi-en")
model = AutoModelForSequenceClassification.from_pretrained("mlkorra/OGBV-gender-bert-hi-en")
```
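Once the model and tokenizer are loaded, classification reduces to taking a softmax over the model's two logits and mapping the argmax to the label names (`NGEN`/`GEN`, presumably non-gendered vs. gendered; the label order here is assumed to follow the model config). The logits below are illustrative only, not real model output:

```python
import math

id2label = {0: "NGEN", 1: "GEN"}  # assumed label order
logits = [-1.2, 2.3]              # illustrative values, not a real prediction

exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]
label = id2label[probs.index(max(probs))]
print(label, round(max(probs), 3))
```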
## Model Performance
|Metric|dev|test|
|---|--|--|
|Accuracy|0.88|0.81|
|F1(weighted)|0.86|0.80|
|
1,342 | mmcquade11/autonlp-imdb-test-21134442 | [
"negative",
"positive"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mmcquade11/autonlp-data-imdb-test
co2_eq_emissions: 298.7849611952843
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 21134442
- CO2 Emissions (in grams): 298.7849611952843
## Validation Metrics
- Loss: 0.21618066728115082
- Accuracy: 0.9393
- Precision: 0.9360730593607306
- Recall: 0.943
- AUC: 0.98362804
- F1: 0.9395237620803029
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mmcquade11/autonlp-imdb-test-21134442
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mmcquade11/autonlp-imdb-test-21134442", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mmcquade11/autonlp-imdb-test-21134442", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,343 | mmcquade11/autonlp-imdb-test-21134453 | [
"negative",
"positive"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mmcquade11/autonlp-data-imdb-test
co2_eq_emissions: 38.102565360610484
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 21134453
- CO2 Emissions (in grams): 38.102565360610484
## Validation Metrics
- Loss: 0.172550767660141
- Accuracy: 0.9355
- Precision: 0.9362853135644159
- Recall: 0.9346
- AUC: 0.98267064
- F1: 0.9354418977079372
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mmcquade11/autonlp-imdb-test-21134453
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mmcquade11/autonlp-imdb-test-21134453", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mmcquade11/autonlp-imdb-test-21134453", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,344 | mnaylor/base-bert-finetuned-mtsamples | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | # BERT Base Fine-tuned on MTSamples
This model is [BERT-base](https://huggingface.co/bert-base-uncased) fine-tuned on the MTSamples dataset, with a classification task defined in [this repo](https://github.com/socd06/medical-nlp).
|
1,346 | mnaylor/bioclinical-bert-finetuned-mtsamples | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | # BioClinical BERT Fine-tuned on MTSamples
This model is simply [Alsentzer's Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) fine-tuned on the MTSamples dataset, with a classification task defined in [this repo](https://github.com/socd06/medical-nlp). |
1,350 | mofawzy/bert-arsentd-lev | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- ar
datasets:
- ArSentD-LEV
tags:
- ArSentD-LEV
widget:
- text: "يهدي الله من يشاء"
- text: "الاسلوب قذر وقمامه"
---
# bert-arsentd-lev
An Arabic BERT model fine-tuned on the ArSentD-LEV dataset.
## Data
The model was fine-tuned on ~4,000 tweets spanning multiple dialects. The dataset is labeled with five classes, of which we used 3 in the experiment.
## Results
| class | precision | recall | f1-score | Support |
|----------|-----------|--------|----------|---------|
| 0 | 0.8211 | 0.8080 | 0.8145 | 125 |
| 1 | 0.7174 | 0.7857 | 0.7500 | 84 |
| 2 | 0.6867 | 0.6404 | 0.6628 | 89 |
| Accuracy | | | 0.7517 | 298 |
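A quick consistency check on the table: the reported accuracy matches the per-class recalls weighted by the class supports, as expected when every example is assigned to exactly one of the three classes:

```python
recalls = [0.8080, 0.7857, 0.6404]  # per-class recall from the table above
supports = [125, 84, 89]

weighted = sum(r * s for r, s in zip(recalls, supports)) / sum(supports)
print(round(weighted, 4))  # ≈ 0.7517, the reported accuracy
```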
## How to use
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library. You can then initialize it directly like this:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name="mofawzy/bert-arsentd-lev"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
|
1,354 | morenolq/SumTO_FNS2020 | [
"LABEL_0"
] | This is the *best performing* model used in the paper: "End-to-end Training For Financial Report Summarization"
https://www.aclweb.org/anthology/2020.fnp-1.20/ |
1,355 | moshew/bert-small-aug-sst2-distilled | [
"0",
"1"
] | Accuracy = 92 |
1,356 | moshew/miny-bert-aug-sst2-distilled | [
"0",
"1"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- augmented_glue_sst2
metrics:
- accuracy
model-index:
- name: miny-bert-aug-sst2-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: augmented_glue_sst2
type: augmented_glue_sst2
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9128440366972477
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# miny-bert-aug-sst2-distilled
This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the augmented_glue_sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2643
- Accuracy: 0.9128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.602 | 1.0 | 6227 | 0.3389 | 0.9186 |
| 0.4195 | 2.0 | 12454 | 0.2989 | 0.9151 |
| 0.3644 | 3.0 | 18681 | 0.2794 | 0.9117 |
| 0.3304 | 4.0 | 24908 | 0.2793 | 0.9106 |
| 0.3066 | 5.0 | 31135 | 0.2659 | 0.9186 |
| 0.2881 | 6.0 | 37362 | 0.2668 | 0.9140 |
| 0.2754 | 7.0 | 43589 | 0.2643 | 0.9128 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
1,357 | moshew/minylm-L3-aug-sst2-distilled | [
"0",
"1"
] | {'test_accuracy': 0.911697247706422,
'test_loss': 0.24090610444545746,
'test_runtime': 0.4372,
'test_samples_per_second': 1994.475,
'test_steps_per_second': 16.011} |
1,358 | moshew/mpnet-base-sst2-distilled | [
"negative",
"positive"
] | {'test_accuracy': 0.9426605504587156,
'test_loss': 0.1693699210882187,
'test_runtime': 1.7713,
'test_samples_per_second': 492.29,
'test_steps_per_second': 3.952} |
1,360 | moussaKam/frugalscore_medium_bert-base_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
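FrugalScore's teachers are expensive pairwise metrics. BERTScore, for instance, greedily matches candidate and reference token embeddings by cosine similarity and combines the matched similarities into precision, recall, and F1. A toy sketch of that computation (the "embeddings" below are made up for illustration; real BERTScore uses contextual BERT vectors, with optional IDF weighting):

```python
import math

cand = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.7, 0.7, 0.0]]   # candidate sentence, 3 token "embeddings"
ref = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 0.6, 0.8, 0.0]]    # reference sentence, 2 token "embeddings"

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def bertscore_f1(cand, ref):
    sim = [[cosine(c, r) for r in ref] for c in cand]
    precision = sum(max(row) for row in sim) / len(cand)    # best match per candidate token
    recall = sum(max(col) for col in zip(*sim)) / len(ref)  # best match per reference token
    return 2 * precision * recall / (precision + recall)

print(round(bertscore_f1(cand, ref), 3))
```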
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,361 | moussaKam/frugalscore_medium_bert-base_mover-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,362 | moussaKam/frugalscore_medium_deberta_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper :
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
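The FrugalScore cards above describe distilling an expensive NLG metric (the teacher) into a cheap student model. As a rough illustration only (this is not the official FrugalScore training code), the core distillation objective can be sketched as a mean-squared-error regression of the student's predicted scores onto the teacher metric's scores:

```python
# Illustrative sketch of metric distillation: the student is trained so that
# its scalar score for each (reference, candidate) pair matches the score
# produced by an expensive teacher metric such as BERTScore or MoverScore.
def mse(pred: float, target: float) -> float:
    """Squared error between one student score and one teacher score."""
    return (pred - target) ** 2

def distillation_loss(student_scores, teacher_scores) -> float:
    """Mean squared error over a batch of paired scores."""
    assert len(student_scores) == len(teacher_scores)
    n = len(student_scores)
    return sum(mse(s, t) for s, t in zip(student_scores, teacher_scores)) / n

# Toy batch: a well-trained student tracks the teacher closely,
# so the loss is small.
teacher = [0.91, 0.45, 0.78]  # hypothetical teacher-metric scores
student = [0.88, 0.50, 0.80]  # hypothetical student predictions
print(distillation_loss(student, teacher))
```

In practice the student is a small sequence-classification model (the `LABEL_0` single-logit head listed in these rows) that takes the reference and candidate as a text pair and regresses a single score.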
1,363 | moussaKam/frugalscore_medium_roberta_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,364 | moussaKam/frugalscore_small_bert-base_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,365 | moussaKam/frugalscore_small_bert-base_mover-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,366 | moussaKam/frugalscore_small_deberta_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,367 | moussaKam/frugalscore_small_roberta_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,368 | moussaKam/frugalscore_tiny_bert-base_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,369 | moussaKam/frugalscore_tiny_bert-base_mover-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,370 | moussaKam/frugalscore_tiny_deberta_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,371 | moussaKam/frugalscore_tiny_roberta_bert-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
1,372 | mradau/stress_classifier | [
"Emotional Turmoil",
"Everyday Decision Making",
"Family Issues",
"Financial Problem",
"Health, Fatigue, or Physical Pain",
"Other",
"School",
"Social Relationships",
"Work"
] | ---
tags:
- generated_from_keras_callback
model-index:
- name: tmpacdj0jf1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmpacdj0jf1
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
1,373 | mradau/stress_score | [
"LABEL_0"
] | ---
tags:
- generated_from_keras_callback
model-index:
- name: tmp10l_qol1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmp10l_qol1
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
1,376 | mrm8488/bert-mini-finetuned-age_news-classification | [
"World",
"Sports",
"Business",
"Sci/Tech"
] | ---
language: en
tags:
- news
- classification
- mini
datasets:
- ag_news
widget:
- text: Israel withdraws from Gaza camp Israel withdraws from Khan Younis refugee
camp in the Gaza Strip, after a four-day operation that left 11 dead.
model-index:
- name: mrm8488/bert-mini-finetuned-age_news-classification
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
metrics:
- type: accuracy
value: 0.9339473684210526
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGMyMDQ2MWFkMjVmNzI1YjJjNGI4MzVmYjI4YjM4NWJhYTE4NjM1YTU5YmFlNjE4OTM1ODUzMTQzYjkzOWFiNCIsInZlcnNpb24iOjF9.2LUwbHZXya2yH5UQmSJgzwad-k00u4woOKWKDbdgYdxBSeAK_5hxql5E6htJyb10xcqrd6fOMHzPNIboC0N_AA
- type: precision
value: 0.9341278202108704
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDQ3MThmNjZkMWUxM2ZkMzk4ZjVkOTIzMDkwN2IwODNkNTMwYjNjZDlhM2VkOWJiMDE0ZGJiZGRhOTJhZTFiNCIsInZlcnNpb24iOjF9.SYSvqyyCO1wX25qDdH7yDa4lB4ZbkPbVDiU7006K-QYG1eQ2ZSaLiYEkPZjnHOP0A1pOKCr1Eu8pyHi61dpeDw
- type: precision
value: 0.9339473684210526
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDdhYThkODVlN2I0ZTBlZmY3ZjBkMGVkZmU0Yjk5NmUzYjA1NWRjZTU1M2FlZDk3NDUyNDhiNjgxZDU2MTcyMyIsInZlcnNpb24iOjF9.GMvGnfrD4KPWlJfbwCSpJ5J-2rhKp9JpULfn9hMA_UIcXAxBvowJ4Jq2TSUACfu0e11ZkHGX9ieInVDaUvaNDA
- type: precision
value: 0.9341278202108704
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWJiNmE1YjhkYmI5ODU5ODI1NjZmOWQ5YTNjMjI0OTZiMWJhNTc5NTkzODc2ZjBmMGQ1M2Y3YTFiOGI4M2Q3ZiIsInZlcnNpb24iOjF9.Rff7RZQBYpghWbMMTOllZD-Hvg0XzHBnu5O-p944wLZDYOdptMQsH5pxMvpHGdMV3ubhywBNmk6LLpWV_SWUBQ
- type: recall
value: 0.9339473684210526
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTgxYThiOWMwNDQ1NTFkZDgwZGQ5YWE0ODg5YjFmNDNkZGQzNTAxYjJkNDE1M2Q1MDlhNWRhZjc0Mjc5MGMwOSIsInZlcnNpb24iOjF9.3zmvO72C0JdwsTuQ0JM4AyQ5uI7nzR7WWi_is9biXl8MpWTzfUnhfID5aMm2Ysbmvq_4LQR-8JhoVGMW-42fDw
- type: recall
value: 0.9339473684210526
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjJmNmQ1NjBmZDg3NzU1NWY5MjNlZTk1ODUxODE2NjNmNDg4NDM0MjFiNmZhYjc2YjhmYTQ5YTNjZjIwNGQ1YSIsInZlcnNpb24iOjF9.KE63WXvmB98lfKxxSfmefCB2rXWhaUWO-YcOMthGoGXMIB4asizAuu-bhOvXaAyFCzGfGKk8OFvB6r6QzE8MCg
- type: recall
value: 0.9339473684210526
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODE4ZTNhNmZkMTQxMmZjYjZmOTI1NGRmM2U3NTE5NDJhZmRkNThlNjVmOTU0ZTIwMDhkYjk1MzYzMDRhY2JkMiIsInZlcnNpb24iOjF9.B4h2dORHGgtsx-2mDbmyOMQzUNaS0WNKoMwI6kGYfQNVp_X0m6XYPI9mxmOJqugVuzctTgY-Ujk24rwUTc2dBg
- type: f1
value: 0.9339351653217216
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTVjMWY4MjQ1ZmRhMWRiZTYwZWVkNjBkMTA3OTJkM2NiNWRkZDg2NWZkZTNlODBmODRlYzQ3NGUzNDBjMjQ5MyIsInZlcnNpb24iOjF9.9EEslDwEPYwqKbKHAi2pwKu75t7Upph7v40ZHiQk-n-PDwvCaYfBtwoamMbKyJhoU7NPcX79FUq8cOvN5t_FBA
- type: f1
value: 0.9339473684210526
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzYxNDkzNjYwODE4NTlkYWI0M2M2MWEwYmE0ZGQxMGJlY2UzYmE5NTBiMjA0ODY0NzI3MzIzOTFmZGJhOTUwZSIsInZlcnNpb24iOjF9.G25vyEwE1dJQYbxyUmWS_aap56kX3O1nSHOdketgGvDEyWeRLWJSfd1LUN14NUp9YvPabgLPSx2X5CKaOvNNCA
- type: f1
value: 0.9339351653217215
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjlmY2ZkZjMyZjE1YjVlMGNkNmJhZTM5ZjFlMzVjMzNkOWRiZWRjZTBmMTk1YjQwZDgwY2YwNTg2YWViMzkxYyIsInZlcnNpb24iOjF9.7S_zA7SiEnaNm3RQzAv8rT0OHuKeG_EB5YpiGJeSlcKDTDyOYoNL5ZQ2wjsUpx5ofAFkdU2u0RNwEu8pgeHjAQ
- type: loss
value: 0.20814141631126404
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmE2OWY4MjU4ZmY1NTgzN2NmNTcwMjU4NWE0ZWE5YTY5M2EyZmJjMzVkZmQ1OGNlNGJiYWFjYmI2ZThhYmU2MyIsInZlcnNpb24iOjF9.WUB0AhSCxHO1Ji7sL92A4UYA7EhJQQTxgZ4lTwm4mdWAYPYxuQ5UhOL0fZpZfsfqtkym8LdcxPozwIvxSsH1AQ
---
# BERT-Mini fine-tuned on age_news dataset for news classification
Test set accuracy: 0.93 |
1,382 | mrm8488/deberta-v3-base-goemotions | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_3",
"LABEL_4",
... | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: deberta-v3-base-goemotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-goemotions
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7610
- F1: 0.4468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.5709 | 1.0 | 6164 | 1.5211 | 0.4039 |
| 1.3689 | 2.0 | 12328 | 1.5466 | 0.4198 |
| 1.1819 | 3.0 | 18492 | 1.5670 | 0.4520 |
| 1.0059 | 4.0 | 24656 | 1.6673 | 0.4479 |
| 0.8129 | 5.0 | 30820 | 1.7610 | 0.4468 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1,383 | mrm8488/deberta-v3-large-finetuned-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
widget:
- text: She was badly wounded already. Another spear would take her down.
model-index:
- name: deberta-v3-large-mnli-2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- type: accuracy
value: 0.8949349064279902
name: Accuracy
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: glue
type: glue
config: mnli
split: validation_matched
metrics:
- type: accuracy
value: 0.9000509424350484
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmU1NTE1YmYwOTA4NmQ3ZWE1MmM0ZDFiNDQ5YWIyMDMyZDhjZWMxYTQ3NGIxOWVkMTQxYTA3MTE2ZTUyYjg0ZiIsInZlcnNpb24iOjF9.UygjleiO4h0rlNa8KJIzJMy2VbMkLF-kB-YowCa_EhLKJQqRr9id5db81MyR_VV3ROrSdHVbCGIM9qxkPRbABg
- type: precision
value: 0.9000452542826349
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2EyMWYxY2ZlNTFhYWRhNjA4MzYxOTI4NDAzMjQwMmI4MTJmMWE3ZWEzZTQwMmMyZTM1MzIxYWEyYzVhNDlmMCIsInZlcnNpb24iOjF9.iq2CgF4ik1_DjPlbmFgxvscryy1NNQjTatCJhDu95sXMdlWkekPS6on3NyEaSDwptKyuTQiF4wh8WZDrfhO_Dw
- type: precision
value: 0.9000509424350484
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmY5NmE1MjU1Yzg3Mzk3MDJiNGUyMzM5NmYxYjljZjY1OTQ3NWE0MWM2MTZhYjQ4ZWFmY2FkODc4OThkMzIxMCIsInZlcnNpb24iOjF9.yN_8lq_IjeLU1WJknAkoj75MQajxLvsIsf_pOPFT0_Q77Vfhu0AsIdy1WDJcsAw08ziJoNpN_2LGDMBYJmwzCQ
- type: precision
value: 0.9014585350976404
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTBkYWM4YTE3N2Q5ZmY5ZTRiMGQ1MDc5ODk2NjQwZDc0ODNkMjk3MjdjMjRlZDU2Yzk1MTliMzhmNjYzYzY2ZCIsInZlcnNpb24iOjF9.f9_fAM_a9LwSBwFgwaO5rdAYzV3wkhHq6yquugL1djRlbISZdpzZFWfJHcS-fvgMayYsklBK_ezbu0f7u7tyDg
- type: recall
value: 0.900253092056111
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTkwZTRmYzhjNDMyMDllNzFiYTNkMDdjN2E2NmEzOTdjMzAxNjdmMzg3OTFmN2IwZTlmYWY5MWQyMDUyNWRlMSIsInZlcnNpb24iOjF9.aWtX33vOHaGpePRZwO9dfTfWoWyXYCVAf8W1AlHXZto6Ve2HX9RLISTsALRMfNzX-7B6LYLh6qzusjf2xQ20Bw
- type: recall
value: 0.9000509424350484
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFhYzVlZjQ3M2YyYjY1NTBiMGI4NmI4MTgwY2QzY2I3YmMyNjc3YmFhMDU1ZjNlY2FkMjQxOTg3YWYyYTU3ZiIsInZlcnNpb24iOjF9.wPD0-SL1vdG3_bi7cKh_hgVwVr1yV6zRYBzpGe6bDEzV5BYb5lCQoAebS5U1o2H4E4qi7zr2YNFEToNCRTqPBA
- type: recall
value: 0.9000509424350484
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNThmNjQ4MDY2ZTM3NjQyODQzMTZkNjgyMGNkNDE5MDMwOWJmMzhjZmZjNzllYjA4NmJiZDU3MzU3ODE0YjFhMyIsInZlcnNpb24iOjF9.yN9hb5VWX5ICIXdPBc0OD0BFHnzWv8rmmD--OEh6h1agGiRiyCdROo4saN5CQKiVlPBsHPliaoXra45Xi4gVAg
- type: f1
value: 0.8997940135019421
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmQzMWZhZTg1ODBmNWFiMGJiZDE5ODA2ZTA3NmUwZDcxMTQ1NzZjNDFiZDZkN2RmMmQ3YzRiMmI2Y2Q3MWRlNiIsInZlcnNpb24iOjF9.lr6jUSxXu6zKs_x-UQT7dL9_PzKTf50KUu7spTzRI6_SkaUyl9Ez0gR-O8bfzypaqkdxvtf7dsNFskpUvJ8wDQ
- type: f1
value: 0.9000509424350484
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWFiZjAzYjQ4NjFjMThjM2RlOGU1YzRjMmQzZTNhMDVjYWE3Njg5Y2QwMzc4YzY0ODNjOWUwMDJiNGU4ODk2MyIsInZlcnNpb24iOjF9.BsWoM2Mb4Kx5Lzm7b9GstHNuxGX7emrFNRcepgYNhjkeEhj3yJbvbboOaJuWMc9TdJEPr3o1PuNiu7zQ_vy_DQ
- type: f1
value: 0.9003949466748086
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ1NjA2Njc0Njk2YzY0MzIwYTYwMWM5MTZhNzhhZDY2ODgyYzVlODlmN2Q2MjRjNzhhNzMyZDQ1ZmYwMjdlMyIsInZlcnNpb24iOjF9.Xdl4G3GaOXzCRhaoDf_sJThoEQLmlGyf4efJCYFKXCe1DfNb4qOl-_h9LuE3-iacvusjIJFIquhQ7YsLtqbrCg
- type: loss
value: 0.6493226289749146
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWU0ZGM5MWE2Mjk3NDI5ZGNkZmFhM2IxYmFiZjVkMjdiNTE4NzA5YWMxNDcxOWYxYjA2MmQ3ZmE1Yzk5M2E2OCIsInZlcnNpb24iOjF9.gsO8l1_9H89OaztnG6rhNuOY-ssmafoUSwuyNRPR5TjqwrimWk4S6k2uCSSoV9h_JvtliFQ94aZhgSB2lGxWCg
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-v3-large fine-tuned on MNLI
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6763
- Accuracy: 0.8949
## Model description
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on the majority of NLU tasks with 80GB of training data.
In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technique details about the new model from our [paper](https://arxiv.org/abs/2111.09543).
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.
The DeBERTa V3 large model comes with 24 layers and a hidden size of 1024. It has 304M backbone parameters, plus a vocabulary of 128K tokens that introduces 131M parameters in the embedding layer. This model was trained with the same 160GB of data as DeBERTa V2.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.3676 | 1.0 | 24544 | 0.3761 | 0.8681 |
| 0.2782 | 2.0 | 49088 | 0.3605 | 0.8881 |
| 0.1986 | 3.0 | 73632 | 0.4672 | 0.8894 |
| 0.1299 | 4.0 | 98176 | 0.5248 | 0.8967 |
| 0.0643 | 5.0 | 122720 | 0.6489 | 0.8999 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,384 | mrm8488/deberta-v3-small-finetuned-cola | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
widget:
- text: They represented seriously to the dean Mary as a genuine linguist.
model-index:
- name: deberta-v3-small
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.6333205721749096
name: Matthews Correlation
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: cola
split: validation
metrics:
- type: accuracy
value: 0.8494726749760306
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjJjOTM0MTEzMzBlZWJlMWYwNzgzZmI3M2NiZWVjMDQ5ZDA1MWY0NGY3NjU1NTlmZWE3N2JjZWEzODE0ZTNkNSIsInZlcnNpb24iOjF9.Kt-3jnDTp3-Te5zMHVgG_5hpB5UMCkAMP7fmjx46QDWJfFHpyRgBlf-qz_fw5saFPAQ5G6QNq3bjEJ6mY2lhAw
- type: precision
value: 0.8455882352941176
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODAxMzNkZGEwNGNmYjk4NWRhZDk4OWE4MzA5Y2NiNjQyNTdkOWRmYjU0ZjY0YzQzYmE4ZmI3MjQ4OTk4OWIwNCIsInZlcnNpb24iOjF9.YBFnePtD5-HX15aST39xpPLroFYBgqEn5iLyVaClh62j0M7HQbB8aaGEbgaTIUIr-qz12gVfIQ7UZZIHxby_BQ
- type: recall
value: 0.957004160887656
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjRjMTVhN2E4YjNlOWY2MWRhODZiM2FhZDVjNzYwMjIyNWUyYTMxMWFlZjkwNzVhYjNmMjQxYjk2MTFmMzYyYiIsInZlcnNpb24iOjF9.40GYlU9Do74Y_gLmbIKR2WM8okz5fm-QUwJAsoIyM1UtQ71lKd-FV5Yr9CdAh3fyQYa3SMYe6tm9OByNMMw_AA
- type: auc
value: 0.9167413271767129
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzVjYmMyZDkyMzM0ZTQ1MTk0ZmY4MWUwZmIxMGRlOWMyMjJmNDRiZGNkMGZlZDZmY2I5OWI2NDYzMGQ2YzhiNSIsInZlcnNpb24iOjF9.setZF_g9x-aknFXM1k0NxrOWMJcmpNi6z7QlyfL0i6fTPJOj6SbKJ1WQb3J1zTuabgx9cOc5xgHtBH3IA7fkDQ
- type: f1
value: 0.8978529603122967
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmQ1NmNiMDhmNTU2Y2UxMzU0ODRmYmZmZTFkYjI4MzczMWUwYWQ4OTk2NGJlY2MzNmViYTA4MTRkODJhMTU1MyIsInZlcnNpb24iOjF9.GUIRxsYKgjYK63JS2rd9vCLHHmCiB4H68Xo5GxMaITfyzcUcdNc6l62njmQGrOoUidlTt1F7DzGP2Cu_Gz8HDg
- type: loss
value: 0.4050811529159546
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjBjNjg0OTFjOTc5Mzc2MWQ1ZDIyYmM5MmIzZDVlY2JjYzBlZjMyN2IwOWU4YzNlMDcwZmM0NTMxYjExY2I0MiIsInZlcnNpb24iOjF9.xayLZc97iUW0zNqG65TiW9BXoqzV-tqF8g9qGCYQ1ZGuSDSjLlK7Y4og7-wqPEiME8JtNyVxl6-ZcWnF1t8cDg
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-v3-small fine-tuned on CoLA
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4051
- Matthews Correlation: 0.6333
## Model description
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, along with some innovations to be introduced in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves model performance on downstream tasks. You can find a brief introduction to the model in appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), but we will provide more details in a separate write-up.
The DeBERTa V3 small model comes with 6 layers and a hidden size of 768. Its total parameter count is 143M, since its 128K-token vocabulary introduces 98M parameters in the embedding layer. This model was trained with the same 160GB of data as DeBERTa V2.
## Intended uses & limitations
More information needed
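The card does not yet include a usage snippet. A minimal sketch with the standard `transformers` pipeline API might look like the following (the model id is the one published on this card, and the output labels assume the card's label list of `acceptable`/`unacceptable`):

```python
from transformers import pipeline

# Minimal sketch (not from the original card): load the fine-tuned
# checkpoint with the standard text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="mrm8488/deberta-v3-small-finetuned-cola",
)

# The card's widget sentence, an intentionally unacceptable example.
result = classifier("They represented seriously to the dean Mary as a genuine linguist.")
print(result)
```

The pipeline returns a list of `{"label": ..., "score": ...}` dicts, one per input sentence.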
## Training and evaluation data
The Corpus of Linguistic Acceptability (CoLA) in its full form consists of 10657 sentences from 23 linguistics publications, expertly annotated for acceptability (grammaticality) by their original authors. The public version provided here contains 9594 sentences belonging to training and development sets, and excludes 1063 sentences belonging to a held out test set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 535 | 0.4051 | 0.6333 |
| 0.3371 | 2.0 | 1070 | 0.4455 | 0.6531 |
| 0.3371 | 3.0 | 1605 | 0.5755 | 0.6499 |
| 0.1305 | 4.0 | 2140 | 0.7188 | 0.6553 |
| 0.1305 | 5.0 | 2675 | 0.8047 | 0.6700 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,385 | mrm8488/deberta-v3-small-finetuned-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
- deberta-v3
datasets:
- glue
metrics:
- accuracy
model-index:
- name: ds_results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.874593165174939
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa v3 (small) fine-tuned on MNLI
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4985
- Accuracy: 0.8746
## Model description
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, along with some innovations to be introduced in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves model performance on downstream tasks. You can find a brief introduction to the model in appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), but we will provide more details in a separate write-up.
The DeBERTa V3 small model comes with 6 layers and a hidden size of 768. Its total parameter count is 143M, since its 128K-token vocabulary introduces 98M parameters in the embedding layer. This model was trained with the same 160GB of data as DeBERTa V2.
## Intended uses & limitations
More information needed
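The card does not yet include a usage snippet. Since MNLI is a sentence-pair task, a minimal sketch (not from the original card; model id and label set as published above) can pass the premise and hypothesis as a pair through the tokenizer:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mrm8488/deberta-v3-small-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Premise/hypothesis are encoded as a single sentence pair.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the argmax index back to the card's labels.
pred = model.config.id2label[int(logits.argmax(dim=-1))]
print(pred)
```

The prediction is one of the three classes listed in this card's label column.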
## Training and evaluation data
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus, with its 550k examples, as auxiliary training data.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7773 | 0.04 | 1000 | 0.5241 | 0.7984 |
| 0.546 | 0.08 | 2000 | 0.4629 | 0.8194 |
| 0.5032 | 0.12 | 3000 | 0.4704 | 0.8274 |
| 0.4711 | 0.16 | 4000 | 0.4383 | 0.8355 |
| 0.473 | 0.2 | 5000 | 0.4652 | 0.8305 |
| 0.4619 | 0.24 | 6000 | 0.4234 | 0.8386 |
| 0.4542 | 0.29 | 7000 | 0.4825 | 0.8349 |
| 0.4468 | 0.33 | 8000 | 0.3985 | 0.8513 |
| 0.4288 | 0.37 | 9000 | 0.4084 | 0.8493 |
| 0.4354 | 0.41 | 10000 | 0.3850 | 0.8533 |
| 0.423 | 0.45 | 11000 | 0.3855 | 0.8509 |
| 0.4167 | 0.49 | 12000 | 0.4122 | 0.8513 |
| 0.4129 | 0.53 | 13000 | 0.4009 | 0.8550 |
| 0.4135 | 0.57 | 14000 | 0.4136 | 0.8544 |
| 0.4074 | 0.61 | 15000 | 0.3869 | 0.8595 |
| 0.415 | 0.65 | 16000 | 0.3911 | 0.8517 |
| 0.4095 | 0.69 | 17000 | 0.3880 | 0.8593 |
| 0.4001 | 0.73 | 18000 | 0.3907 | 0.8587 |
| 0.4069 | 0.77 | 19000 | 0.3686 | 0.8630 |
| 0.3927 | 0.81 | 20000 | 0.4008 | 0.8593 |
| 0.3958 | 0.86 | 21000 | 0.3716 | 0.8639 |
| 0.4016 | 0.9 | 22000 | 0.3594 | 0.8679 |
| 0.3945 | 0.94 | 23000 | 0.3595 | 0.8679 |
| 0.3932 | 0.98 | 24000 | 0.3577 | 0.8645 |
| 0.345 | 1.02 | 25000 | 0.4080 | 0.8699 |
| 0.2885 | 1.06 | 26000 | 0.3919 | 0.8674 |
| 0.2858 | 1.1 | 27000 | 0.4346 | 0.8651 |
| 0.2872 | 1.14 | 28000 | 0.4105 | 0.8674 |
| 0.3002 | 1.18 | 29000 | 0.4133 | 0.8708 |
| 0.2954 | 1.22 | 30000 | 0.4062 | 0.8667 |
| 0.2912 | 1.26 | 31000 | 0.3972 | 0.8708 |
| 0.2958 | 1.3 | 32000 | 0.3713 | 0.8732 |
| 0.293 | 1.34 | 33000 | 0.3717 | 0.8715 |
| 0.3001 | 1.39 | 34000 | 0.3826 | 0.8716 |
| 0.2864 | 1.43 | 35000 | 0.4155 | 0.8694 |
| 0.2827 | 1.47 | 36000 | 0.4224 | 0.8666 |
| 0.2836 | 1.51 | 37000 | 0.3832 | 0.8744 |
| 0.2844 | 1.55 | 38000 | 0.4179 | 0.8699 |
| 0.2866 | 1.59 | 39000 | 0.3969 | 0.8681 |
| 0.2883 | 1.63 | 40000 | 0.4000 | 0.8683 |
| 0.2832 | 1.67 | 41000 | 0.3853 | 0.8688 |
| 0.2876 | 1.71 | 42000 | 0.3924 | 0.8677 |
| 0.2855 | 1.75 | 43000 | 0.4177 | 0.8719 |
| 0.2845 | 1.79 | 44000 | 0.3877 | 0.8724 |
| 0.2882 | 1.83 | 45000 | 0.3961 | 0.8713 |
| 0.2773 | 1.87 | 46000 | 0.3791 | 0.8740 |
| 0.2767 | 1.91 | 47000 | 0.3877 | 0.8779 |
| 0.2772 | 1.96 | 48000 | 0.4022 | 0.8690 |
| 0.2816 | 2.0 | 49000 | 0.3837 | 0.8732 |
| 0.2068 | 2.04 | 50000 | 0.4644 | 0.8720 |
| 0.1914 | 2.08 | 51000 | 0.4919 | 0.8744 |
| 0.2 | 2.12 | 52000 | 0.4870 | 0.8702 |
| 0.1904 | 2.16 | 53000 | 0.5038 | 0.8737 |
| 0.1915 | 2.2 | 54000 | 0.5232 | 0.8711 |
| 0.1956 | 2.24 | 55000 | 0.5192 | 0.8747 |
| 0.1911 | 2.28 | 56000 | 0.5215 | 0.8761 |
| 0.2053 | 2.32 | 57000 | 0.4604 | 0.8738 |
| 0.2008 | 2.36 | 58000 | 0.5162 | 0.8715 |
| 0.1971 | 2.4 | 59000 | 0.4886 | 0.8754 |
| 0.192 | 2.44 | 60000 | 0.4921 | 0.8725 |
| 0.1937 | 2.49 | 61000 | 0.4917 | 0.8763 |
| 0.1931 | 2.53 | 62000 | 0.4789 | 0.8778 |
| 0.1964 | 2.57 | 63000 | 0.4997 | 0.8721 |
| 0.2008 | 2.61 | 64000 | 0.4748 | 0.8756 |
| 0.1962 | 2.65 | 65000 | 0.4840 | 0.8764 |
| 0.2029 | 2.69 | 66000 | 0.4889 | 0.8767 |
| 0.1927 | 2.73 | 67000 | 0.4820 | 0.8758 |
| 0.1926 | 2.77 | 68000 | 0.4857 | 0.8762 |
| 0.1919 | 2.81 | 69000 | 0.4836 | 0.8749 |
| 0.1911 | 2.85 | 70000 | 0.4859 | 0.8742 |
| 0.1897 | 2.89 | 71000 | 0.4853 | 0.8766 |
| 0.186 | 2.93 | 72000 | 0.4946 | 0.8768 |
| 0.2011 | 2.97 | 73000 | 0.4851 | 0.8767 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
1,386 | mrm8488/deberta-v3-small-finetuned-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
- deberta-v3
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-small
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.8921568627450981
name: Accuracy
- type: f1
value: 0.9233449477351917
name: F1
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: glue
type: glue
config: mrpc
split: validation
metrics:
- type: accuracy
value: 0.8921568627450981
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmQ3MjM2NTJiZjJmM2UxYTlmMDczOTQ2MjY4OTdhZTAyM2RiMTc2YjZiNWIwZDk1ZGUxMjgzMDBiOWVjZTQ4OCIsInZlcnNpb24iOjF9.yerN7Izy0yT3ykyO3t5Mr-TO3oxpTMfijCWJKnA_XO_rt81LP3-9qbqknXur6ahHqKN-1BLtr_fmAu0-IPQyDA
- type: precision
value: 0.8983050847457628
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWQxODVmYTM4OThlMjNhY2MzZTBhMWJmMmNjMDMyYjYyNzc4NWI3YzJjZDkzMTcyOWEwN2IxOWYyOGQ5NTY5MSIsInZlcnNpb24iOjF9.cfqvd8wnSqhHj5fKlIb6JN9He8ooAu94tFJytw2I93qqGSVvaTktM0Ib_DqPuHYneGY1DGbgb6Nsl90DiZSMCQ
- type: recall
value: 0.9498207885304659
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjg3Y2Y1NGY0NTRjMWFhYTAxMWYxMTcxNWM2ZDU5NGY1ZTk3OTJmZWQyYmIzMGJiZWQ0YWQ2MjNhOGU2MGU0ZCIsInZlcnNpb24iOjF9.jj7VNaWQU3u3tnngqCixlfkwF8h6ykzvHm4tgezJe1pacAU0Tsugn7IPvAJTrvNE0sU8_Q7dm-C_UKQGzmlIBw
- type: auc
value: 0.9516129032258065
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhkOTQ0ZmVlYTYwNTdjY2IxYTM5ZThhYzgzZWMxMGQzMThmZDkwNTcyMWZiNzg4Y2I3NjZhMzVjYmNmN2FlZiIsInZlcnNpb24iOjF9.28hOJFgnyNHXMpaFbNTEcolUcuNVqrXNSuT6hTs2vrjlAIWVnzxUfaHjH2kVYh1-sOSNSE9maetd1CtQ7i78CQ
- type: f1
value: 0.9233449477351917
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmY0ZWE5Y2Q5YmZlOWM4OTU0OGIwOWEwNDk3MTlkYTY5YzgwMjQwNDFjYWU4ZDdmZWY4Nzc0MzQzMTM2YTRhYyIsInZlcnNpb24iOjF9.NymiR2fVXaI6ytAGZFM8HuQLxTJlxuUsWziVNaauyuJ9xfOLOGVJ6VI_H7CoBwc-pZKbKiQOvtfpOGwt1J22CA
- type: loss
value: 0.2787226438522339
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGNhMDgyMGI3ZWI4NDVkYzM0NjE1ZTk0YjczYzU4NmRhOGYxM2RlMjU3YThhY2QzNmU3NmJhM2IzMWI5MDMwNyIsInZlcnNpb24iOjF9.HFdpBkvu0671KUgkOtpSgeGBr3wU7g51zVt3-wEwVWhS4hMX4oPFAqF4JBxFx3mgbGjTDiRQ2xiA5lm0UnkdCg
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa v3 (small) fine-tuned on MRPC
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2787
- Accuracy: 0.8922
- F1: 0.9233
- Combined Score: 0.9078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| No log | 1.0 | 230 | 0.2787 | 0.8922 | 0.9233 | 0.9078 |
| No log | 2.0 | 460 | 0.3651 | 0.875 | 0.9137 | 0.8944 |
| No log | 3.0 | 690 | 0.5238 | 0.8799 | 0.9179 | 0.8989 |
| No log | 4.0 | 920 | 0.4712 | 0.8946 | 0.9222 | 0.9084 |
| 0.2147 | 5.0 | 1150 | 0.5704 | 0.8946 | 0.9262 | 0.9104 |
| 0.2147 | 6.0 | 1380 | 0.5697 | 0.8995 | 0.9284 | 0.9140 |
| 0.2147 | 7.0 | 1610 | 0.6651 | 0.8922 | 0.9214 | 0.9068 |
| 0.2147 | 8.0 | 1840 | 0.6726 | 0.8946 | 0.9239 | 0.9093 |
| 0.0183 | 9.0 | 2070 | 0.7250 | 0.8848 | 0.9177 | 0.9012 |
| 0.0183 | 10.0 | 2300 | 0.7093 | 0.8922 | 0.9223 | 0.9072 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
1,387 | mrm8488/deberta-v3-small-finetuned-qnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-v3-small
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- type: accuracy
value: 0.9150649826102873
name: Accuracy
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: glue
type: glue
config: qnli
split: validation
metrics:
- type: accuracy
value: 0.914881933003844
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDY2NmRlOTEyMzkwMjc5MjVjZDY3MTczMmM2ZTEyZTFiMTk1YmJiYjkxYmYyYTAzNDlhOTU5OTMzZjhhMjkyMSIsInZlcnNpb24iOjF9.aoHEeaQLKI4uwmTgp8Lo9zRoParcSlyDiXZiRrWTqZJIMHgwKgQg52zvYYrZ9HMjjIvWjdW9G_s_DfxqBoekDA
- type: precision
value: 0.9195906432748538
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGMyMjUyNTliOWZjMzkzM2Y3YWU0ODhiNDcyOTAwZjYyZjRiNGQ5NTgyODM4Y2VjNGRlYzNkNTViNmJhNzM0ZSIsInZlcnNpb24iOjF9.fJdQ7M46RGvp_uXk9jvBpl0RFAIGTRAtk8bRQGjNn_uy5weBm6tENL-OclZHwG4uU6LviGTdXmAwn5Ba37hNBw
- type: recall
value: 0.9112640347700108
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2Y2ZmIyZTMzMzM1MTc1OWQ0YWI2ZjU2MzQ5NGU1M2FjNDRiOWViM2NkNWU2M2UzZjljMDJjNmUzZTQ1YWM2MiIsInZlcnNpb24iOjF9.6kVxEkJ-Fojy9HgMevsHovimj3IYp97WO2991zQOFN8nEpPc0hThFk5kMRotS-jPSLFh0mS2PVhQ5x3HIo17Ag
- type: auc
value: 0.9718281171793548
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmZiMGU3MzVjMWNlOTViNmZlYmZjZDRmMzI4OGI4NzAxN2Y5OTE2YmVlMzEzY2ZmODBlODQ1ZjA5MTlhNmEzYyIsInZlcnNpb24iOjF9.byBFlu-eyAmwGQ_tkVi3zaSklTY4G6qenYu1b6hNvYlfPeCuBtVA6qJNF_DI4QWZyEBtdICIyYHzTUHGcAFUBg
- type: f1
value: 0.9154084045843187
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDdmZjk4MzRkMzgyMDY0MjZjZTZiYWNiMTE5MjBiMTBhYWQyYjVjYzk5Mzc1NzQxMGFkMzk4NDUzMjg1YmYzMCIsInZlcnNpb24iOjF9.zYUMpTtIHycUUa5ftwz3hjFb8xk0V5LaUbCDA679Q1BZtXZrEaXtSjbJNKiLBQip1gIwYC1aADcfgSELoBG8AA
- type: loss
value: 0.21421395242214203
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGM1YjNiNWFmYzQ3NDJiZTlhNDZiNWIxMjc3M2I1OWJlYzkzYWJkNzVkZDdiNWY4YjNiZDM0NzYxZjQ1OGQ4NSIsInZlcnNpb24iOjF9.qI91L1kE_ZjSOktpGx3OolCkHZuP0isPgKy2EC-YB_M3LEDym4APHVUjhwCgYFCu3-LcVH8syQ7SmI4mrovDAw
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-v3-small fine-tuned on QNLI
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2143
- Accuracy: 0.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2823 | 1.0 | 6547 | 0.2143 | 0.9151 |
| 0.1996 | 2.0 | 13094 | 0.2760 | 0.9103 |
| 0.1327 | 3.0 | 19641 | 0.3293 | 0.9169 |
| 0.0811 | 4.0 | 26188 | 0.4278 | 0.9193 |
| 0.05 | 5.0 | 32735 | 0.5110 | 0.9176 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,388 | mrm8488/deberta-v3-small-finetuned-sst2 | [
"negative",
"positive"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
- deberta-v3
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-v3-small
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- type: accuracy
value: 0.9403669724770642
name: Accuracy
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: validation
metrics:
- type: accuracy
value: 0.9403669724770642
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2MyOTE4ZTk0YzUyNGFkMGVjNTk4MDBlZGRlZjgzOGIzYWY0YjExMmZmMDZkYjFmOTlkYmM2ZDEwYjMxM2JkOCIsInZlcnNpb24iOjF9.Ks2vdjAFUe0isZp4F-OFK9HzvPqeU3mJEG_XJfOvkTdm9DyaefT9x78sof8i_EbIync5Ao7NOC4STCTQIUvgBw
- type: precision
value: 0.9375
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzNiZTEwNGNlZWUwZjMxYmRjNWU0ZGQ1Njg1M2MwNTQ3YWEwN2JlNDk4OWQ4MzNkMmNhOGUwMzA0YWU3ZWZjMiIsInZlcnNpb24iOjF9.p5Gbs680U45zHoWH9YgRLmOxINR4emvc2yNe9Kt3-y_WyyCd6CAAK9ht-IyGJ7GSO5WQny-ISngJFtyFt5NqDQ
- type: recall
value: 0.9459459459459459
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjk2MmJjMDZlZDUzM2QzMWZhMzMxNWRkYjJlYzA3MjUwMThiYWMwNmQzODE1MTMxNTdkNWVmMDhhNzJjMjg3MyIsInZlcnNpb24iOjF9.Jeu6tyhXQxMykqqFH0V-IXvyTrxAsgnYByYCOJgfj86957G5LiGdfQzDtTuGkt0XcoenXhPuueT8m5tsuJyLBA
- type: auc
value: 0.9804217184474193
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2Q5MWU1MGMzMjEwNzY4MDkzN2Q5ZjM5MTQ2MDc5YTRkZTNmNTk2YTdhODI1ZGJlOTlkNTQ2M2Q4YTUxN2Y3OSIsInZlcnNpb24iOjF9.INkDvQhg2jfD7WEE4qHJazPYo10O4Ffc5AZz5vI8fmN01rK3sXzzydvmrmTMzYSSmLhn9sc1-ZkoWbcv81oqBA
- type: f1
value: 0.9417040358744394
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWRhNjljZjk0NjY1ZjU1ZjU2ZmM5ODk1YTVkMTI0ZGY4MjI1OTFlZWJkZWMyMGYxY2I1MzRjODBkNGVlMzJkZSIsInZlcnNpb24iOjF9.kQ547NVFUxeE4vNiGzGsCvMxR1MCJTChX44ds27qQ4Rj2m1UuD2C9TLTuiu8KMvq1mH1io978dJEpOCHYq6KCQ
- type: loss
value: 0.21338027715682983
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2YyYmVhNzgxMzMyNjJiNzZkYjE1YWM5Y2ZmMTlkNjQ5MThhYjIxNTE5MmE3Y2E0ODllODMyYjAzYWI3ZWRlMSIsInZlcnNpb24iOjF9.ad9rLnOeJZbRi_QQKEBpNNBp_Bt5SHf39ZeWQOZxp7tAK9dc0OK8XOqtihoXcAWDahwuoGiiYtcFNtvueaX6DA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa v3 (small) fine-tuned on SST2
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2134
- Accuracy: 0.9404
## Model description
More information needed
## Intended uses & limitations
More information needed
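The card does not yet include a usage snippet. A minimal sketch with the standard `transformers` pipeline API might look like this (the model id is the one published on this card; the output labels assume the card's `negative`/`positive` label list, and the example sentence is illustrative):

```python
from transformers import pipeline

# Minimal sketch (not from the original card): sentiment classification
# with the fine-tuned SST-2 checkpoint.
classifier = pipeline(
    "text-classification",
    model="mrm8488/deberta-v3-small-finetuned-sst2",
)

result = classifier("A charming and often affecting journey.")
print(result)
```

As with any text-classification pipeline, the result is a list of `{"label": ..., "score": ...}` dicts.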
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.176 | 1.0 | 4210 | 0.2134 | 0.9404 |
| 0.1254 | 2.0 | 8420 | 0.2362 | 0.9415 |
| 0.0957 | 3.0 | 12630 | 0.3187 | 0.9335 |
| 0.0673 | 4.0 | 16840 | 0.3039 | 0.9266 |
| 0.0457 | 5.0 | 21050 | 0.3521 | 0.9312 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
1,389 | mrm8488/deberta-v3-small-goemotions | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_3",
"LABEL_4",
... | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: deberta-v3-small-goemotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-goemotions
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5638
- F1: 0.4241
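A minimal usage sketch (the example sentence is made up; note that this checkpoint exposes generic `LABEL_0`–`LABEL_27` names rather than the GoEmotions emotion strings):

```python
from transformers import pipeline

# Returns the top label per input as a list of {'label', 'score'} dicts.
classifier = pipeline(
    "text-classification",
    model="mrm8488/deberta-v3-small-goemotions",
)
result = classifier("I can't believe how great this turned out!")
print(result[0]["label"], result[0]["score"])
```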
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.614 | 1.0 | 3082 | 1.5577 | 0.3663 |
| 1.4338 | 2.0 | 6164 | 1.5580 | 0.4084 |
| 1.2936 | 3.0 | 9246 | 1.5006 | 0.4179 |
| 1.1531 | 4.0 | 12328 | 1.5348 | 0.4276 |
| 1.0536 | 5.0 | 15410 | 1.5638 | 0.4241 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1,391 | mrm8488/distilroberta-finetuned-age_news-classification | [
"World",
"Sports",
"Business",
"Sci/Tech"
] | ---
language: en
tags:
- news
- classification
datasets:
- ag_news
widget:
- text: "Venezuela Prepares for Chavez Recall Vote Supporters and rivals warn of possible fraud; government says Chavez's defeat could produce turmoil in world oil market."
---
# distilroberta-base fine-tuned on age_news dataset for news classification
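A minimal usage sketch (the example headline is made up; the four AG News labels are World, Sports, Business and Sci/Tech):

```python
from transformers import pipeline

# Classifies a news headline into one of the four AG News categories.
classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-age_news-classification",
)
result = classifier("Stocks rallied after the central bank cut interest rates.")
print(result[0]["label"], result[0]["score"])
```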
Test set accuracy: 0.94 |
1,392 | mrm8488/distilroberta-finetuned-banking77 | [
"activate_my_card",
"age_limit",
"apple_pay_or_google_pay",
"atm_support",
"automatic_top_up",
"balance_not_updated_after_bank_transfer",
"balance_not_updated_after_cheque_or_cash_deposit",
"beneficiary_not_allowed",
"cancel_transfer",
"card_about_to_expire",
"card_acceptance",
"card_arrival",... | ---
language: en
tags:
- banking
- intent
- multiclass
datasets:
- banking77
widget:
- text: "How long until my transfer goes through?"
---
# distilroberta-base fine-tuned on banking77 dataset for intent classification
Test set accuracy: 0.896
## How to use
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
ckpt = 'mrm8488/distilroberta-finetuned-banking77'
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier('What is the base of the exchange rates?')
# Output: [{'label': 'exchange_rate', 'score': 0.8509947657585144}]
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain |
1,393 | mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis | [
"negative",
"neutral",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
- financial
- stocks
- sentiment
widget:
- text: "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: distilRoberta-financial-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_allagree
metrics:
- name: Accuracy
type: accuracy
value: 0.9823008849557522
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilRoberta-financial-sentiment
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1116
- Accuracy: 0.9823
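A minimal usage sketch with the widget example from this card (the labels are negative, neutral and positive):

```python
from transformers import pipeline

# Financial-news sentiment over the Financial PhraseBank label set.
classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",
)
result = classifier("Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 .")
print(result[0]["label"], result[0]["score"])
```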
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 255 | 0.1670 | 0.9646 |
| 0.209 | 2.0 | 510 | 0.2290 | 0.9558 |
| 0.209 | 3.0 | 765 | 0.2044 | 0.9558 |
| 0.0326 | 4.0 | 1020 | 0.1116 | 0.9823 |
| 0.0326 | 5.0 | 1275 | 0.1127 | 0.9779 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,396 | mrm8488/electricidad-base-finetuned-muchocine | [
"1",
"2",
"3",
"4",
"5"
] | ---
language: es
datasets:
- muchocine
widget:
- text: "Una buena película, sin más."
tags:
- sentiment
- analysis
- spanish
---
# Electricidad-base fine-tuned for (Spanish) Sentiment Analysis 🎞️👍👎
[Electricidad](https://huggingface.co/mrm8488/electricidad-base-discriminator) base fine-tuned on [muchocine](https://huggingface.co/datasets/muchocine) dataset for Spanish **Sentiment Analysis** downstream task.
## Fast usage with `pipelines` 🚀
```python
# pip install -q transformers
from transformers import AutoModelForSequenceClassification, AutoTokenizer
CHKPT = 'mrm8488/electricidad-base-finetuned-muchocine'
model = AutoModelForSequenceClassification.from_pretrained(CHKPT)
tokenizer = AutoTokenizer.from_pretrained(CHKPT)
from transformers import pipeline
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
# It ranks your comments between 1 and 5 (stars)
classifier('Es una obra mestra. Brillante.')
# [{'label': '5', 'score': 0.9498381614685059}]
classifier('Es una película muy buena.')
# [{'label': '4', 'score': 0.9277070760726929}]
classifier('Una buena película, sin más.')
# [{'label': '3', 'score': 0.9768431782722473}]
classifier('Esperaba mucho más.')
# [{'label': '2', 'score': 0.7063605189323425}]
classifier('He tirado el dinero. Una basura. Vergonzoso.')
# [{'label': '1', 'score': 0.8494752049446106}]
``` |
1,399 | mrm8488/electricidad-small-finetuned-muchocine | [
"⭐",
"⭐ ⭐",
"⭐ ⭐ ⭐",
"⭐ ⭐ ⭐ ⭐",
"⭐ ⭐ ⭐ ⭐ ⭐"
] | ---
language: es
datasets:
- muchocine
widget:
- text: "Una buena película, sin más."
tags:
- sentiment
- analysis
- spanish
---
# Electricidad-small fine-tuned for (Spanish) Sentiment Analysis 🎞️👍👎
[Electricidad](https://huggingface.co/mrm8488/electricidad-small-discriminator) small fine-tuned on [muchocine](https://huggingface.co/datasets/muchocine) dataset for Spanish **Sentiment Analysis** downstream task.
## Fast usage with `pipelines` 🚀
```python
# pip install -q transformers
from transformers import AutoModelForSequenceClassification, AutoTokenizer
CHKPT = 'mrm8488/electricidad-small-finetuned-muchocine'
model = AutoModelForSequenceClassification.from_pretrained(CHKPT)
tokenizer = AutoTokenizer.from_pretrained(CHKPT)
from transformers import pipeline
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
# It ranks your comments between 1 and 5 (stars)
classifier('Es una obra maestra. Brillante.')
classifier('Es una película muy buena.')
classifier('Una buena película, sin más.')
classifier('Esperaba mucho más.')
classifier('He tirado el dinero. Una basura. Vergonzoso.')
``` |
1,401 | mrm8488/electricidad-small-finetuned-xnli-es | [
"entailment",
"neutral",
"contradiction"
] | ---
language: es
tags:
- spanish
- nli
- xnli
datasets:
- xnli
license: mit
widget:
- text: "Por favor, no piensen en darnos dinero. Por favor, considere piadosamente cuanto puede dar."
---
# electricidad-small-finetuned-xnli-es
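Per the tags above, this is Electricidad-small fine-tuned on the Spanish portion of XNLI for natural language inference. A minimal usage sketch (the premise/hypothesis pair is made up; label names are read from the model config):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "mrm8488/electricidad-small-finetuned-xnli-es"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

# Made-up premise/hypothesis pair for illustration.
premise = "El gato duerme en el sofá."
hypothesis = "Un animal está descansando."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = model.config.id2label[logits.argmax(dim=-1).item()]
print(pred)  # one of: entailment / neutral / contradiction
```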
|
1,404 | msavel-prnt/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metric:
name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7528
- Accuracy: 0.9181
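A minimal usage sketch (the example utterance is made up; the model predicts one of the CLINC intent labels, e.g. `book_flight`):

```python
from transformers import pipeline

# Intent classification over the clinc_oos (plus) label set.
classifier = pipeline(
    "text-classification",
    model="msavel-prnt/distilbert-base-uncased-finetuned-clinc",
)
result = classifier("I need to book a flight to Boston for tomorrow morning.")
print(result[0]["label"], result[0]["score"])
```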
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.3044 | 0.7623 |
| 3.7959 | 2.0 | 636 | 1.8674 | 0.8597 |
| 3.7959 | 3.0 | 954 | 1.1377 | 0.8948 |
| 1.6819 | 4.0 | 1272 | 0.8351 | 0.9126 |
| 0.8804 | 5.0 | 1590 | 0.7528 | 0.9181 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
|
1,405 | mschwab/va_bert_classification | [
"VA",
"no VA"
] | ---
language:
- en
tags:
- sentence classification
- vossian antonomasia
license: "apache-2.0"
datasets:
- custom
widget:
- text: Bijan wants Jordan to be the Elizabeth Taylor of men's fragrances.
metrics:
- f1
- precision
- recall
---
## English Vossian Antonomasia Sentence Classifier
This page presents a fine-tuned [BERT-base-cased](https://huggingface.co/bert-base-cased) language model for classifying sentences that include Vossian Antonomasia.
The label "VA" corresponds to the occurrence of a Vossian Antonomasia in the sentence.
### Dataset
The dataset is a labeled Vossian Antonomasia dataset that evolved from [Schwab et al. 2019](https://www.aclweb.org/anthology/D19-1647.pdf) and was updated in [Schwab et al. 2022](https://doi.org/10.3389/frai.2022.868249).
### Results
F1 score: 0.974
For more results, please have a look at [our paper](https://doi.org/10.3389/frai.2022.868249).
---
### Cite
Please cite the following paper when using this model.
```
@article{schwab2022rodney,
title={“The Rodney Dangerfield of Stylistic Devices”: End-to-End Detection and Extraction of Vossian Antonomasia Using Neural Networks},
author={Schwab, Michel and J{\"a}schke, Robert and Fischer, Frank},
journal={Frontiers in Artificial Intelligence},
volume={5},
year={2022},
publisher={Frontiers Media SA}
}
```
---
### Interested in more?
Visit our [Website](http://vossanto.weltliteratur.net/) for more research on Vossian Antonomasia, including interactive visualizations for exploration. |
1,406 | muhtasham/autonlp-Doctor_DE-24595544 | [
"target"
] | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP 🤗"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 92.87363201770962
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595544
- CO2 Emissions (in grams): 92.87363201770962
## Validation Metrics
- Loss: 0.3001164197921753
- MSE: 0.3001164197921753
- MAE: 0.24272102117538452
- R2: 0.8465975006681247
- RMSE: 0.5478288531303406
- Explained Variance: 0.8468209505081177
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595544
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595544", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595544", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,407 | muhtasham/autonlp-Doctor_DE-24595545 | [
"target"
] | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP 🤗"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 203.30658367993382
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595545
- CO2 Emissions (in grams): 203.30658367993382
## Validation Metrics
- Loss: 0.30214861035346985
- MSE: 0.30214861035346985
- MAE: 0.25911855697631836
- R2: 0.8455587614373526
- RMSE: 0.5496804714202881
- Explained Variance: 0.8476610779762268
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595545
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595545", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595545", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,408 | muhtasham/autonlp-Doctor_DE-24595546 | [
"target"
] | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP 🤗"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 210.5957437893554
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595546
- CO2 Emissions (in grams): 210.5957437893554
## Validation Metrics
- Loss: 0.3092539310455322
- MSE: 0.30925390124320984
- MAE: 0.25015318393707275
- R2: 0.841926941198094
- RMSE: 0.5561060309410095
- Explained Variance: 0.8427215218544006
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595546
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,409 | muhtasham/autonlp-Doctor_DE-24595547 | [
"target"
] | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP 🤗"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 396.5529429198159
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595547
- CO2 Emissions (in grams): 396.5529429198159
## Validation Metrics
- Loss: 1.9565489292144775
- MSE: 1.9565489292144775
- MAE: 0.9890901446342468
- R2: -7.68965036332947e-05
- RMSE: 1.3987668752670288
- Explained Variance: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595547
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595547", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595547", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,410 | muhtasham/autonlp-Doctor_DE-24595548 | [
"target"
] | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP 🤗"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 183.88911013564527
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595548
- CO2 Emissions (in grams): 183.88911013564527
## Validation Metrics
- Loss: 0.3050823509693146
- MSE: 0.3050823509693146
- MAE: 0.2664000689983368
- R2: 0.844059188176304
- RMSE: 0.5523425936698914
- Explained Variance: 0.8472161293029785
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595548
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595548", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595548", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,411 | mujeensung/albert-base-v2_mnli_bc | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-base-v2_mnli_bc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9398776667163956
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2_mnli_bc
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2952
- Accuracy: 0.9399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2159 | 1.0 | 16363 | 0.2268 | 0.9248 |
| 0.1817 | 2.0 | 32726 | 0.2335 | 0.9347 |
| 0.0863 | 3.0 | 49089 | 0.3014 | 0.9401 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1,412 | mujeensung/roberta-base_mnli_bc | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base_mnli_bc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9583768461882739
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_mnli_bc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2125
- Accuracy: 0.9584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2015 | 1.0 | 16363 | 0.1820 | 0.9470 |
| 0.1463 | 2.0 | 32726 | 0.1909 | 0.9559 |
| 0.0768 | 3.0 | 49089 | 0.2117 | 0.9585 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|