modelId stringlengths 6 107 | label list | readme stringlengths 0 56.2k | readme_len int64 0 56.2k |
|---|---|---|---|
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-MSA Poetry Classification Model
## Model description
**CAMeLBERT-MSA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9914996027946472},
{'label': 'الكامل', 'score': 0.917242169380188}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 3,393 |
Cameron/BERT-Jigsaw | null | Entry not found | 15 |
JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish | [
"Not_bullying",
"Bullying"
] | ---
language: es
tags:
- "spanish"
metrics:
- accuracy
widget:
- text: "Eres mas pequeño que un pitufo!"
- text: "Eres muy feo!"
- text: "Odio tu forma de hablar!"
- text: "Eres tan fea que cuando eras pequeña te echaban de comer por debajo de la puerta."
---
# roberta-base-bne-finetuned-ciberbullying-spanish
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on a dataset generated by scraping several social networks (Twitter, YouTube, ...) to detect cyberbullying in Spanish.
It achieves the following results on the evaluation set:
- Loss: 0.1657
- Accuracy: 0.9607
## Training and evaluation data
I used the concatenation of multiple datasets generated by scraping social networks (Twitter, YouTube, Discord, ...) to fine-tune this model. The total number of sentences is above 360k.
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.1512 | 1.0 | 22227 | 0.9501 | 0.1418 |
| 0.1253 | 2.0 | 44454 | 0.9567 | 0.1499 |
| 0.0973 | 3.0 | 66681 | 0.9594 | 0.1397 |
| 0.0658 | 4.0 | 88908 | 0.9607 | 0.1657 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-ciberbullying-spanish"
bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
bullying_analysis(
"Desde que te vi me enamoré de ti."
)
# Output:
[{'label': 'Not_bullying', 'score': 0.9995710253715515}]
bullying_analysis(
"Eres tan fea que cuando eras pequeña te echaban de comer por debajo de la puerta."
)
# Output:
[{'label': 'Bullying', 'score': 0.9918262958526611}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(SPANISH).ipynb)
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) | 2,686 |
Nenma/romanian-bert-fake-news | null | Entry not found | 15 |
TransQuest/monotransquest-da-ru_en-reddit_wikiquotes | [
"LABEL_0"
] | ---
language: ru-en
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as such systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi on all the languages we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-ru_en-reddit_wikiquotes", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 5,414 |
benjaminbeilharz/bert-base-uncased-dailydialog-turn-classifier | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
boronbrown48/wangchanberta-topic-classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
boychaboy/MNLI_bert-base-uncased_2 | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
cardiffnlp/bertweet-base-stance-abortion | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | 0 | |
pertschuk/albert-large-intent-v3 | null | Entry not found | 15 |
sagteam/pharm-relation-extraction | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7"
] | pharm-relation-extraction
===
A model trained to recognize 4 types of relationships between significant pharmacological entities in Russian-language reviews: ADR–Drugname, Drugname–Diseasename, Drugname–SourceInfoDrug, and Diseasename–Indication. The model takes a review text and a pair of entities as input and determines whether a relationship exists between them and, if so, which of the 4 types listed above it is.
Data
----
The proposed model is trained on a subset of 908 reviews from the [Russian Drug Review Corpus (RDRS)](https://arxiv.org/pdf/2105.00059.pdf). The subset contains pairs of entities marked with the 4 listed relationship types:
- ADR-Drugname — the relationship between the drug and its side effects
- Drugname-SourceInfodrug — the relationship between the medication and the source of information about it (e.g., “was advised at the pharmacy”, “the doctor recommended it”);
- Drugname-Diseasename — the relationship between the drug and the disease
- Diseasename-Indication — the connection between the illness and its symptoms (e.g., “cough”, “fever 39 degrees”)
Also, this subset contains pairs of the same entity types between which there is no relationship: for example, a drug and an unrelated side effect that appeared after taking another drug; in other words, this side effect is related to another drug.
Model topology and training
----
The proposed model is based on the [XLM-RoBERTa-large](https://arxiv.org/abs/1911.02116) topology. After additional training as a language model on a corpus of unlabeled drug reviews, it was trained as a classification model on 80% of the texts from the corpus subset described above.
How to use
----
See section "How to use" in [our git repository for the model](https://github.com/sag111/Relation_Extraction)
Results
----
Below are the F1 scores for the recognition of each relationship type on the best fold.
| ADR–Drugname | Drugname–Diseasename | Drugname–SourceInfoDrug | Diseasename–Indication |
| ------------- | -------------------- | ----------------------- | ---------------------- |
| 0.955 | 0.892 | 0.922 | 0.891 |
Citation info
----
If you have found our results helpful in your work, feel free to cite our publication as:
```
@article{sboev2021extraction,
title={Extraction of the Relations between Significant Pharmacological Entities in Russian-Language Internet Reviews on Medications},
author={Sboev, Alexander and Selivanov, Anton and Moloshnikov, Ivan and Rybka, Roman and Gryaznov, Artem and Sboeva, Sanna and Rylkov, Gleb},
year={2021},
publisher={Preprints}
}
``` | 2,714 |
wrmurray/roberta-base-finetuned-imdb | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9552
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-imdb
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1783
- Accuracy: 0.9552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1904 | 1.0 | 1563 | 0.1423 | 0.9517 |
| 0.1187 | 2.0 | 3126 | 0.1783 | 0.9552 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,625 |
tae898/emoberta-large | [
"anger",
"disgust",
"fear",
"joy",
"neutral",
"sadness",
"surprise"
] | ---
language: en
tags:
- emoberta
- roberta
license: mit
datasets:
- MELD
- IEMOCAP
---
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Conversation (ERC)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in)
At the moment, we only use the text modality to correctly classify the emotion of the utterances. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).
## Prerequisites
1. An x86-64 Unix or Unix-like machine
1. Python 3.8 or higher
1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up with the system python.
1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule)
1. pip install -r requirements.txt
## EmoBERTa training
First configure the hyperparameters and the dataset in `train-erc-text.yaml`, and then run the command below in this directory. I recommend running this in a virtualenv.
```sh
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
## Results on the test split (weighted f1 scores)
| Model | | MELD | IEMOCAP |
| -------- | ------------------------------- | :-------: | :-------: |
| EmoBERTa | No past and future utterances | 63.46 | 56.09 |
| | Only past utterances | 64.55 | **68.57** |
| | Only future utterances | 64.23 | 66.56 |
| | Both past and future utterances | **65.61** | 67.42 |
| | → *without speaker names* | 65.07 | 64.02 |
Above numbers are the mean values of five random seed runs.
If you want to see more training and test details, check out `./results/`
If you want to download the trained checkpoints and stuff, then [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download) is where you can download them. It's a pretty big zip file.
## Deployment
### Huggingface
We have released our models on huggingface:
- [emoberta-base](https://huggingface.co/tae898/emoberta-base)
- [emoberta-large](https://huggingface.co/tae898/emoberta-large)
They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are neither speaker-aware nor do they take previous utterances into account, meaning that they only classify one utterance at a time without speaker information (e.g., "I love you").
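If you just want to classify a single utterance without the Flask app described below, a minimal `transformers` pipeline sketch is shown here (the example utterance is an arbitrary placeholder, not from the datasets):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tae898/emoberta-large",
    return_all_scores=True,  # return scores for all seven emotion classes
)
print(classifier("I love you"))
```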
### Flask app
You can either run the Flask RESTful server app as a docker container or just as a python script.
1. Running the app as a docker container **(recommended)**.
There are four images. Take what you need:
- `docker run -it --rm -p 10006:10006 tae898/emoberta-base`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda`
- `docker run -it --rm -p 10006:10006 tae898/emoberta-large`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda`
1. Running the app in your python environment:
This method is less recommended than the docker one.
Run `pip install -r requirements-deploy.txt` first.<br>
The [`app.py`](app.py) is a flask RESTful server. The usage is below:
```console
app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]
```
For example:
```sh
python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base
```
### Client
Once the app is running, you can send a text to the server. First install the necessary packages: `pip install -r requirements-client.txt`, and then run the [client.py](client.py). The usage is as below:
```console
client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT
```
For example:
```sh
python client.py --text "Emotion recognition is so cool\!"
```
will give you:
```json
{
"neutral": 0.0049800905,
"joy": 0.96399665,
"surprise": 0.018937444,
"anger": 0.0071516023,
"sadness": 0.002021492,
"disgust": 0.001495996,
"fear": 0.0014167271
}
```
## Troubleshooting
The best way to find and solve your problems is to see in the github issue tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive.
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
1. Run `make style && quality` in the root repo directory, to ensure code quality.
1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
1. Push to the Branch (`git push origin feature/AmazingFeature`)
1. Open a Pull Request
## Cite our work
Check out the [paper](https://arxiv.org/abs/2108.12009).
```bibtex
@misc{kim2021emoberta,
title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa},
author={Taewoon Kim and Piek Vossen},
year={2021},
eprint={2108.12009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[](https://zenodo.org/badge/latestdoi/328375452)<br>
## Authors
- [Taewoon Kim](https://taewoonkim.com/)
## License
[MIT](https://choosealicense.com/licenses/mit/)
| 6,025 |
Intel/roberta-base-mrpc-int8-static | [
"0",
"1"
] | ---
language:
- en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- glue
metrics:
- f1
model-index:
- name: roberta-base-mrpc-int8-static
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.924693520140105
---
# INT8 roberta-base-mrpc
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [roberta-base-mrpc](https://huggingface.co/Intel/roberta-base-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so the real sampling size is 304.
The embedding module **roberta.embeddings.token_type_embeddings** falls back to fp32 due to *RuntimeError('Expect weight, indices, and offsets to be contiguous.')*
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9247|0.9138|
| **Model size (MB)** |121|476|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/roberta-base-mrpc-int8-static',
)
```
| 1,414 |
Hate-speech-CNERG/urdu-abusive-MuRIL | null | ---
language: ur
license: afl-3.0
---
This model is used to detect **abusive speech** in **Urdu**. It is fine-tuned from the MuRIL model on an Urdu abusive speech dataset.
The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive)
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
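A minimal usage sketch with the `transformers` pipeline is shown below (not part of the original training repository; the Urdu sentence is an arbitrary placeholder):
```python
from transformers import pipeline

detector = pipeline("text-classification", model="Hate-speech-CNERG/urdu-abusive-MuRIL")
# LABEL_0 -> Normal, LABEL_1 -> Abusive (see the mapping above)
print(detector("یہ ایک مثال ہے"))
```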
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ | 955 |
HiTZ/A2T_RoBERTa_SMFA_ACE-arg | [
"contradiction",
"entailment",
"neutral"
] | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotTextClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information please, take a look to the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Standford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
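As a quick illustration of the zero-shot pipeline compatibility mentioned above, a minimal sketch is shown below. The example sentence, candidate labels, and hypothesis template are placeholders of ours rather than the templates used in the Ask2Transformers documentation; note the `[[ ]]` trigger marking discussed above.
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_ACE-arg",
)
result = classifier(
    "The [[ attack ]] killed three people in the city.",
    candidate_labels=["attacker", "victim", "place"],
    hypothesis_template="This text talks about the {} of the event.",
)
print(result["labels"][0], result["scores"][0])
```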
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` | 3,612 |
Adapting/comfort_congratulations_neutral-classifier | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] |
# Adapting/comfort_congratulations_neutral-classifier
code used to train this model: https://colab.research.google.com/drive/1BHc8UMuT0sRyA_M24Acits5oHwUmjsFm?usp=sharing
dataset: https://huggingface.co/datasets/Adapting/empathetic_dialogues_v2
LABEL_0: neutral
LABEL_1: congratulating
LABEL_2: comforting | 311 |
dwing/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9335
- name: F1
type: f1
value: 0.9336729469235073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1616
- Accuracy: 0.9335
- F1: 0.9337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1003 | 1.0 | 250 | 0.1854 | 0.931 | 0.9311 |
| 0.0891 | 2.0 | 500 | 0.1616 | 0.9335 | 0.9337 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,801 |
ArnavL/roberta-base-agnews-0 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
aatmasidha/newsmodelclassification | [
"Sadness",
"Joy",
"Love",
"Anger",
"Fear",
"Surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: newsmodelclassification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271124951673986
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newsmodelclassification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2065
- Accuracy: 0.927
- F1: 0.9271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8011 | 1.0 | 250 | 0.2902 | 0.911 | 0.9090 |
| 0.2316 | 2.0 | 500 | 0.2065 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
| 1,768 |
zhernosek12/classif_sasha | [
"cmr",
"inoe",
"rgd",
"schet",
"schet-faktura",
"tovarnaya-nakladnaya"
] | Entry not found | 15 |
PGT/graphnystromformer-s-artificial-balanced-max500-490000-0 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | Entry not found | 15 |
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-CA Poetry Classification Model
## Model description
**CAMeLBERT-CA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9845284819602966},
{'label': 'الكامل', 'score': 0.752918004989624}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 3,380 |
Capreolus/electra-base-msmarco | null | # capreolus/electra-base-msmarco
## Model description
ELECTRA-Base model (`google/electra-base-discriminator`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the [TFElectraRelevanceHead](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) in the Capreolus BERT-MaxP implementation for a usage example.
This corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_electra_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
| 935 |
Intel/bert-base-uncased-mnli-sparse-70-unstructured | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: en
---
# Sparse BERT base model fine tuned to MNLI (uncased)
Sparse BERT base fine-tuned on the MNLI task (GLUE benchmark), starting from [bert-base-uncased-sparse-70-unstructured](https://huggingface.co/Intel/bert-base-uncased-sparse-70-unstructured).
<br><br>
Note: This model requires `transformers==2.10.0`
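A minimal usage sketch is shown below (ours, not from the original card). It assumes the `transformers==2.10.0` requirement above and only uses APIs available in that version; the premise/hypothesis pair is an arbitrary example.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Intel/bert-base-uncased-mnli-sparse-70-unstructured"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Encode a premise/hypothesis pair (encode_plus is available in transformers 2.10.0).
inputs = tokenizer.encode_plus(
    "A man is playing a guitar.",
    "A person is making music.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs)[0]  # the model returns a tuple in transformers 2.10.0
probs = torch.softmax(logits, dim=-1)
print(probs)  # probabilities over the three MNLI classes (see the model config for the label order)
```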
## Evaluation Results
Matched: 82.5%
Mismatched: 83.3%
This model can be further fine-tuned to other tasks and achieve the following evaluation results:
| Task | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) | STS-B (Pears/Spear) | SQuADv1.1 (Acc/F1) |
|------|--------------|------------|-------------|---------------------|--------------------|
| | 90.2/86.7 | 90.3 | 91.5 | 88.9/88.6 | 80.5/88.2 |
| 759 |
adelevie/distilbert-gsa-eula-opp | null | Entry not found | 15 |
akdeniz27/bert-turkish-text-classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8"
] | ---
language: tr
---
# Turkish Text Classification for Complaints Data Set
This model is a fine-tuned version of https://github.com/stefan-it/turkish-bert, trained on text classification data with the following 9 categories:
id_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ',
4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER', 7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}
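A minimal usage sketch with the `transformers` pipeline is shown below (the complaint text is an arbitrary example of ours, not from the original dataset):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="akdeniz27/bert-turkish-text-classification")
result = classifier("Otobüs durakta durmadı ve şoför çok kaba davrandı.")
# Returns [{'label': 'LABEL_i', 'score': ...}]; map LABEL_i to a category with the id_to_category dict above.
print(result)
```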
| 467 |
arianpasquali/distilbert-base-multilingual-cased-toxicity | [
"not_toxic",
"toxic"
] | Entry not found | 15 |
recobo/chemical-bert-uncased-pharmaceutical-chemical-classifier | null | ---
language: "en"
tags:
- buy-intent
- sell-intent
- consumer-intent
widget:
- text: "Flutoprazepam (Restas) is a drug which is a benzodiazepine. It was patented in Japan by Sumitomo."
---
# Chemical vs Pharmaceutical Domain Document Classifier
Chemical domain language model finetuned on 13K Chemical, and 14K Pharma Wikipedia articles broken into paragraphs.
| Train Loss | Validation Acc. | Test Acc.|
| ------------- |:-------------: | -----: |
| 0.17 | 0.928 | 0.927 |
# Dataset
Dataset with splits can be found @ [https://www.kaggle.com/shahrukhkhan/pharma-vs-chemicals-domain-classification](https://www.kaggle.com/shahrukhkhan/pharma-vs-chemicals-domain-classification)
# Label Mappings
LABEL_0 => **"PHARMACEUTICAL"** <br/>
LABEL_1 => **"CHEMICAL"**
## Usage in Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("recobo/chemical-bert-uncased-pharmaceutical-chemical-classifier")
model = AutoModelForSequenceClassification.from_pretrained("recobo/chemical-bert-uncased-pharmaceutical-chemical-classifier")
``` | 1,130 |
ShihTing/HealthBureauSix | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
tags: autonlp
language: unk
widget:
- text: "民眾來電反映:事由:護士態度惡劣,對病人大吼大叫,對於態度惡劣的人卻於與錄用,敬請相關單位改善"
- text: "民眾來電:
時間:2016年3月24號至2019年10月26號
地點:三軍總醫院 北投分院
事由:民眾表揚上述地點及時間有些醫護人員很優秀、親切、具有專業服務水準、好相處(2病房的護理師陳怡鎮、歐素玲、陳芊糖,7病房蔡閔儒,12病房林哲玄、黃仙怡,主治醫師楊蕙年)
訴求:敬請相關單位給予表揚與肯定
"
- text: "本人之先生2-3年前接受吳醫師植牙治療,本人之先生已付完植牙醫療費用,但吳醫師尚未完成本人先生之植牙,診所即關閉,導致本人先生植牙之牙體未鎖緊且不斷發炎、無法咀嚼,精神跟身體上都受到傷害,去別家牙醫診所看診也沒有醫師願意處理。後本人發現吳醫師有在XX牙醫診所(台北市)看診,本人之先生去該診所再請吳醫師協助處理原本植牙方面問題,但診所跟本人先生收取3萬5的材料費,本人認為不合理,本人已付完當初植牙費用,且是吳醫師當初未處理好,應該全權負責,現在再收取醫療費用,實在不合理。"
---
Health Bureau text classification -> six classes
Data random_state=43
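A minimal usage sketch with the `transformers` pipeline (the input below is the first widget example above; predictions come back as LABEL_0 ... LABEL_5):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ShihTing/HealthBureauSix")
print(classifier("民眾來電反映:事由:護士態度惡劣,對病人大吼大叫,對於態度惡劣的人卻於與錄用,敬請相關單位改善"))
```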
| 547 |
spartan97/distilbert-base-uncased-finetuned-objectivity-rotten | [
"NEGATIVE",
"POSITIVE"
] | ---
license: gpl-3.0
---
Objectivity sentence classification model based on **distilbert-base-uncased-finetuned-sst-2-english**. It was fine-tuned with Rotten-IMDB movie review [data](http://www.cs.cornell.edu/people/pabo/movie-review-data/) using extracted sentences from film plots as objective examples and review comments as subjective language examples.
Using a 5% test split, we obtained an accuracy of 96% and an F1 score of the same value.
Please, feel free to try the demo online with subjective language examples like "I think...", "I believe...", and more objective claims.
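Outside the hosted demo, a minimal `transformers` pipeline sketch is shown below (the example sentences are ours). Note that the returned label names are those stored in the model config (NEGATIVE/POSITIVE, inherited from the base checkpoint); this card does not document which of them corresponds to the objective class.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="spartan97/distilbert-base-uncased-finetuned-objectivity-rotten",
)
print(classifier("I think this is the best movie I have ever seen."))        # subjective-style input
print(classifier("The film was released in 2003 and runs for 112 minutes."))  # objective-style input
```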
For any further comments, contact me at marcosfernandez.pichel@usc.es.
| 652 |
tsdocode/phobert-finetune-hatespeech | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- vi
tags:
- classification
widget:
- text: "Xấu vcl"
example_title: "Công kích"
- text: "Đồ ngu"
example_title: "Thù ghét"
- text: "Xin chào chúc một ngày tốt lành"
example_title: "Normal"
---
## [PhoBert](https://huggingface.co/vinai/phobert-base/tree/main) finetuned version for hate speech detection
## Dataset
- [**VLSP2019**](https://vlsp.org.vn/vlsp2019/eval/hsd): Hate Speech Detection on Social Networks Dataset
- [**ViHSD**](https://github.com/sonlam1102/vihsd): Vietnamese Hate Speech Detection dataset
## Class name
- LABEL_0 : **Normal**
- LABEL_1 : **OFFENSIVE**
- LABEL_2 : **HATE**
## Usage example with **TextClassificationPipeline**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline
model = AutoModelForSequenceClassification.from_pretrained("tsdocode/phobert-finetune-hatespeech", num_labels=3)
tokenizer = AutoTokenizer.from_pretrained("tsdocode/phobert-finetune-hatespeech")
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
# outputs a list of dicts with a score for each class, e.g. [[{'label': 'LABEL_0', 'score': ...}, {'label': 'LABEL_1', 'score': ...}, {'label': 'LABEL_2', 'score': ...}]]
pipe("đồ ngu")
``` | 1,242 |
waboucay/camembert-large-finetuned-xnli_fr_3_classes-finetuned-repnum_wl_3_classes | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 78.3 | 78.3 |
| test | 79.5 | 79.4 | | 367 |
AI-Prize-Challenges/autotrain-finetuned1-1035435583 | [
"negative",
"positive"
] | ---
tags: autotrain
language: zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- AI-Prize-Challenges/autotrain-data-finetuned1
co2_eq_emissions: 0.03608660562919794
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1035435583
- CO2 Emissions (in grams): 0.03608660562919794
## Validation Metrics
- Loss: 0.31551286578178406
- Accuracy: 0.8816629547141797
- Precision: 0.8965702036441586
- Recall: 0.8906042054830983
- AUC: 0.9449180200540812
- F1: 0.8935772466283884
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AI-Prize-Challenges/autotrain-finetuned1-1035435583
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("AI-Prize-Challenges/autotrain-finetuned1-1035435583", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("AI-Prize-Challenges/autotrain-finetuned1-1035435583", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,232 |
Eleven/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2263
- Accuracy: 0.9225
- F1: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8571 | 1.0 | 250 | 0.3333 | 0.902 | 0.8982 |
| 0.2507 | 2.0 | 500 | 0.2263 | 0.9225 | 0.9221 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
| 1,487 |
Jimchoo91/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9231998923975969
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.923
- F1: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8243 | 1.0 | 250 | 0.3183 | 0.906 | 0.9019 |
| 0.2543 | 2.0 | 500 | 0.2251 | 0.923 | 0.9232 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.7.1
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,797 |
IlyaGusev/xlm_roberta_large_headline_cause_simple | [
"not_cause",
"left_right",
"right_left"
] | ---
language:
- ru
- en
tags:
- xlm-roberta-large
datasets:
- IlyaGusev/headline_cause
license: apache-2.0
widget:
- text: "Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку"
---
# XLM-RoBERTa HeadlineCause Simple
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Simple task with 3 possible labels: A causes B, B causes A, no causal relation. English and Russian languages are supported.
You can use the hosted inference API to infer a label for a headline pair. To do this, you should separate the headlines with the ```</s>``` token.
For example:
```
Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку
```
## Intended uses & limitations
#### How to use
```python
from tqdm.notebook import tqdm
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
def get_batch(data, batch_size):
start_index = 0
while start_index < len(data):
end_index = start_index + batch_size
batch = data[start_index:end_index]
yield batch
start_index = end_index
def pipe_predict(data, pipe, batch_size=64):
raw_preds = []
for batch in tqdm(get_batch(data, batch_size)):
raw_preds += pipe(batch)
return raw_preds
MODEL_NAME = TOKENIZER_NAME = "IlyaGusev/xlm_roberta_large_headline_cause_simple"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, framework="pt", return_all_scores=True)
texts = [
(
"Judge issues order to allow indoor worship in NC churches",
"Some local churches resume indoor services after judge lifted NC governor’s restriction"
),
(
"Gov. Kevin Stitt defends $2 million purchase of malaria drug touted by Trump",
"Oklahoma spent $2 million on malaria drug touted by Trump"
),
(
"Песков опроверг свой перевод на удаленку",
"Дмитрий Песков перешел на удаленку"
)
]
pipe_predict(texts, pipe)
```
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: [IlyaGusev/headline_cause](https://huggingface.co/datasets/IlyaGusev/headline_cause)
* GitHub: [IlyaGusev/HeadlineCause](https://github.com/IlyaGusev/HeadlineCause)
## Training procedure
* Notebook: [HeadlineCause](https://colab.research.google.com/drive/1NAnD0OJ0TnYCJRsHpYUyYkjr_yi8ObcA)
* Stand-alone script: [train.py](https://github.com/IlyaGusev/HeadlineCause/blob/main/headline_cause/train.py)
## Eval results
Evaluation results can be found in the [arxiv paper](https://arxiv.org/pdf/2108.12626.pdf).
### BibTeX entry and citation info
```bibtex
@misc{gusev2021headlinecause,
title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities},
author={Ilya Gusev and Alexey Tikhonov},
year={2021},
eprint={2108.12626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 3,163 |
Narsil/tiny-distilbert-sequence-classification | null | Entry not found | 15 |
boychaboy/MNLI_bert-base-uncased | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
boychaboy/MNLI_roberta-base | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
crazould/multimodal-emotion-recognition | [
"anger",
"disgust",
"fear",
"joy",
"neutral",
"sadness",
"surprise"
] | Entry not found | 15 |
erst/xlm-roberta-base-finetuned-db07 | [
"011100",
"011200",
"011300",
"011400",
"011500",
"011600",
"011900",
"012100",
"012200",
"012300",
"012400",
"012500",
"012600",
"012700",
"012800",
"012900",
"013000",
"014100",
"014200",
"014300",
"014400",
"014500",
"014610",
"014620",
"014700",
"014910",
"014920",
"015000",
"016100",
"016200",
"016300",
"016400",
"017000",
"021000",
"022000",
"023000",
"024000",
"031100",
"031200",
"032100",
"032200",
"051000",
"052000",
"061000",
"062000",
"071000",
"072100",
"072900",
"081100",
"081200",
"089100",
"089200",
"089300",
"089900",
"091000",
"099000",
"101110",
"101190",
"101200",
"101300",
"102010",
"102020",
"103100",
"103200",
"103900",
"104100",
"104200",
"105100",
"105200",
"106100",
"106200",
"107110",
"107120",
"107200",
"107300",
"108100",
"108200",
"108300",
"108400",
"108500",
"108600",
"108900",
"109100",
"109200",
"110100",
"110200",
"110300",
"110400",
"110500",
"110600",
"110700",
"120000",
"131000",
"132000",
"133000",
"139100",
"139210",
"139220",
"139300",
"139400",
"139500",
"139600",
"139900",
"141100",
"141200",
"141300",
"141400",
"141900",
"142000",
"143100",
"143900",
"151100",
"151200",
"152000",
"161000",
"162100",
"162200",
"162300",
"162400",
"162900",
"171100",
"171200",
"172100",
"172200",
"172300",
"172400",
"172900",
"181100",
"181200",
"181300",
"181400",
"182000",
"191000",
"192000",
"201100",
"201200",
"201300",
"201400",
"201500",
"201600",
"201700",
"202000",
"203000",
"204100",
"204200",
"205100",
"205200",
"205300",
"205900",
"206000",
"211000",
"212000",
"221100",
"221900",
"222100",
"222200",
"222300",
"222900",
"231100",
"231200",
"231300",
"231400",
"231900",
"232000",
"233100",
"233200",
"234100",
"234200",
"234300",
"234400",
"234900",
"235100",
"235200",
"236100",
"236200",
"236300",
"236400",
"236500",
"236900",
"237000",
"239100",
"239910",
"239990",
"241000",
"242000",
"243100",
"243200",
"243300",
"243400",
"244100",
"244200",
"244300",
"244400",
"244500",
"244600",
"245100",
"245200",
"245300",
"245400",
"251100",
"251200",
"252100",
"252900",
"253000",
"254000",
"255000",
"256100",
"256200",
"257100",
"257200",
"257300",
"259100",
"259200",
"259300",
"259400",
"259900",
"261100",
"261200",
"262000",
"263000",
"264000",
"265100",
"265200",
"266010",
"266090",
"267000",
"268000",
"271100",
"271200",
"272000",
"273100",
"273200",
"273300",
"274000",
"275100",
"275200",
"279000",
"281110",
"281190",
"281200",
"281300",
"281400",
"281500",
"282100",
"282200",
"282300",
"282400",
"282500",
"282900",
"283000",
"284100",
"284900",
"289100",
"289200",
"289300",
"289400",
"289500",
"289600",
"289900",
"291000",
"292000",
"293100",
"293200",
"301100",
"301200",
"302000",
"303000",
"304000",
"309100",
"309200",
"309900",
"310100",
"310200",
"310300",
"310900",
"321100",
"321200",
"321300",
"322000",
"323000",
"324000",
"325000",
"329100",
"329900",
"331100",
"331200",
"331300",
"331400",
"331500",
"331600",
"331700",
"331900",
"332000",
"351100",
"351200",
"351300",
"351400",
"352100",
"352200",
"352300",
"353000",
"360000",
"370000",
"381100",
"381200",
"382110",
"382120",
"382200",
"383100",
"383200",
"390000",
"411000",
"412000",
"421100",
"421200",
"421300",
"422100",
"422200",
"429100",
"429900",
"431100",
"431200",
"431300",
"432100",
"432200",
"432900",
"433100",
"433200",
"433300",
"433410",
"433420",
"433900",
"439100",
"439910",
"439990",
"451110",
"451120",
"451910",
"451920",
"452010",
"452020",
"452030",
"452040",
"453100",
"453200",
"454000",
"461100",
"461200",
"461300",
"461400",
"461500",
"461600",
"461710",
"461790",
"461800",
"461900",
"462100",
"462200",
"462300",
"462400",
"463100",
"463200",
"463300",
"463410",
"463420",
"463500",
"463600",
"463700",
"463810",
"463890",
"463900",
"464100",
"464210",
"464220",
"464310",
"464320",
"464330",
"464340",
"464350",
"464410",
"464420",
"464500",
"464610",
"464620",
"464700",
"464800",
"464910",
"464920",
"464930",
"464990",
"465100",
"465210",
"465220",
"466100",
"466200",
"466300",
"466400",
"466500",
"466600",
"466900",
"467100",
"467200",
"467310",
"467320",
"467400",
"467500",
"467600",
"467700",
"469000",
"471110",
"471120",
"471130",
"471900",
"472100",
"472200",
"472300",
"472400",
"472500",
"472600",
"472900",
"473000",
"474100",
"474200",
"474300",
"475100",
"475210",
"475220",
"475300",
"475400",
"475910",
"475920",
"475930",
"475940",
"475990",
"476100",
"476200",
"476300",
"476410",
"476420",
"476430",
"476500",
"477110",
"477120",
"477210",
"477220",
"477300",
"477400",
"477500",
"477610",
"477620",
"477630",
"477700",
"477810",
"477820",
"477830",
"477840",
"477890",
"477900",
"478100",
"478200",
"478900",
"479111",
"479112",
"479113",
"479114",
"479115",
"479116",
"479117",
"479119",
"479120",
"479900",
"491000",
"492000",
"493110",
"493120",
"493200",
"493910",
"493920",
"494100",
"494200",
"495000",
"501000",
"502000",
"503000",
"504000",
"511010",
"511020",
"512100",
"512200",
"521000",
"522110",
"522120",
"522130",
"522210",
"522220",
"522300",
"522400",
"522910",
"522920",
"522990",
"531000",
"532000",
"551010",
"551020",
"552000",
"553000",
"559000",
"561010",
"561020",
"562100",
"562900",
"563000",
"581100",
"581200",
"581300",
"581410",
"581420",
"581900",
"582100",
"582900",
"591110",
"591120",
"591200",
"591300",
"591400",
"592000",
"601000",
"602000",
"611000",
"612000",
"613000",
"619000",
"620100",
"620200",
"620300",
"620900",
"631100",
"631200",
"639100",
"639900",
"641100",
"641900",
"642010",
"642020",
"642030",
"643010",
"643020",
"643030",
"643040",
"649100",
"649210",
"649220",
"649230",
"649240",
"649900",
"651100",
"651200",
"652000",
"653010",
"653020",
"661100",
"661200",
"661900",
"662100",
"662200",
"662900",
"663000",
"681000",
"682010",
"682020",
"682030",
"682040",
"683110",
"683120",
"683210",
"683220",
"691000",
"692000",
"701010",
"701020",
"702100",
"702200",
"711100",
"711210",
"711220",
"711230",
"711240",
"711290",
"712010",
"712020",
"712090",
"721100",
"721900",
"722000",
"731110",
"731190",
"731200",
"732000",
"741010",
"741020",
"741030",
"742000",
"743000",
"749010",
"749090",
"750000",
"771100",
"771200",
"772100",
"772200",
"772900",
"773100",
"773200",
"773300",
"773400",
"773500",
"773900",
"774000",
"781000",
"782000",
"783000",
"791100",
"791200",
"799000",
"801000",
"802000",
"803000",
"811000",
"812100",
"812210",
"812220",
"812290",
"812900",
"813000",
"821100",
"821900",
"822000",
"823000",
"829100",
"829200",
"829900",
"841100",
"841200",
"841300",
"842100",
"842200",
"842300",
"842400",
"842500",
"843000",
"851000",
"852010",
"852020",
"853110",
"853120",
"853200",
"854100",
"854200",
"855100",
"855200",
"855300",
"855900",
"856000",
"861000",
"862100",
"862200",
"862300",
"869010",
"869020",
"869030",
"869040",
"869090",
"871010",
"871020",
"872010",
"872020",
"873010",
"873020",
"879010",
"879020",
"879090",
"881010",
"881020",
"881030",
"889110",
"889120",
"889130",
"889140",
"889150",
"889160",
"889910",
"889920",
"889990",
"900110",
"900120",
"900200",
"900300",
"900400",
"910110",
"910120",
"910200",
"910300",
"910400",
"920000",
"931100",
"931200",
"931300",
"931900",
"932100",
"932910",
"932990",
"941100",
"941200",
"942000",
"949100",
"949200",
"949900",
"951100",
"951200",
"952100",
"952200",
"952300",
"952400",
"952500",
"952900",
"960110",
"960120",
"960210",
"960220",
"960300",
"960400",
"960900",
"970000",
"981000",
"982000",
"990000"
] | # Classifying Text into DB07 Codes
This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned to classify Danish descriptions of activities into [Dansk Branchekode DB07](https://www.dst.dk/en/Statistik/dokumentation/nomenklaturer/dansk-branchekode-db07) codes.
## Data
Approximately 2.5 million business names and descriptions of activities from Norwegian and Danish businesses were used to fine-tune the model. The Norwegian descriptions were translated into Danish and the Norwegian SN 2007 codes were translated into Danish DB07 codes.
Activity descriptions and business names were concatenated but separated by the separator token `</s>`. Thus, the model was trained on input texts in the format `f"{description_of_activity}</s>{business_name}"`.
## Quick Start
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("erst/xlm-roberta-base-finetuned-db07")
model = AutoModelForSequenceClassification.from_pretrained("erst/xlm-roberta-base-finetuned-db07")
pl = pipeline(
"sentiment-analysis",
model=model,
tokenizer=tokenizer,
return_all_scores=False,
)
pl("Vi sælger sko")
pl("We sell clothes</s>Clothing ApS")
```
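For completeness, here is a short sketch (not part of the original card) that follows the documented `description_of_activity</s>business_name` input format and reads off the predicted DB07 code; the Danish description and company name below are invented examples.
```python
# Continues from the Quick Start above and reuses the `pl` pipeline.
result = pl("Vi reparerer cykler</s>Cykelsmeden ApS")  # made-up description and company name
print(result[0]["label"], result[0]["score"])  # predicted DB07 code and its confidence
```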
| 1,254 |
federicopascual/finetuning-sentiment-model-3000-samples-testcopy | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples-testcopy
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8761904761904761
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-testcopy
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3374
- Accuracy: 0.87
- F1: 0.8762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,524 |
nickmuchi/distilroberta-finetuned-financial-text-classification | [
"bearish",
"neutral",
"bullish"
] | ---
license: apache-2.0
language: "en"
tags:
- financial-sentiment-analysis
- sentiment-analysis
- sentence_50agree
- generated_from_trainer
- financial
- stocks
- sentiment
datasets:
- financial_phrasebank
- Kaggle Self label
- nickmuchi/financial-classification
metrics:
- f1
widget:
- text: "The USD rallied by 10% last night"
example_title: "Bullish Sentiment"
- text: "Covid-19 cases have been increasing over the past few months impacting earnings for global firms"
example_title: "Bearish Sentiment"
- text: "the USD has been trending lower"
example_title: "Mildly Bearish Sentiment"
model-index:
- name: distilroberta-finetuned-finclass
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: finance
args: sentence_50agree
metrics:
- type: F1
name: F1
value: 0.8835
- type: accuracy
name: accuracy
value: 0.89
---
# distilroberta-finetuned-financial-text-classification
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the sentence_50agree [financial-phrasebank + Kaggle dataset](https://huggingface.co/datasets/nickmuchi/financial-classification), a dataset consisting of 4,840 financial news texts categorised by sentiment (negative, neutral, positive). The Kaggle dataset contributes Covid-19 sentiment data and can be found here: [sentiment-classification-selflabel-dataset](https://www.kaggle.com/percyzheng/sentiment-classification-selflabel-dataset).
It achieves the following results on the evaluation set:
- Loss: 0.4463
- F1: 0.8835
## Model description
The model determines the financial sentiment of a given text. Given the unbalanced distribution of the class labels, the class weights were adjusted to pay more attention to the less-sampled labels, which should improve overall performance. The Covid dataset was added in order to enrich the model, given that most models have not been trained on the impact of Covid-19 on earnings or markets.
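A minimal usage sketch is shown below; it assumes the standard `transformers` text-classification pipeline and reuses the widget example from this card, so treat it as illustrative rather than part of the reported training setup.
```python
from transformers import pipeline

# Load the fine-tuned financial sentiment classifier.
classifier = pipeline(
    "text-classification",
    model="nickmuchi/distilroberta-finetuned-financial-text-classification",
)
print(classifier("The USD rallied by 10% last night"))
# Expected output shape: [{'label': 'bullish', 'score': ...}]
```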
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7309 | 1.0 | 72 | 0.3671 | 0.8441 |
| 0.3757 | 2.0 | 144 | 0.3199 | 0.8709 |
| 0.3054 | 3.0 | 216 | 0.3096 | 0.8678 |
| 0.2229 | 4.0 | 288 | 0.3776 | 0.8390 |
| 0.1744 | 5.0 | 360 | 0.3678 | 0.8723 |
| 0.1436 | 6.0 | 432 | 0.3728 | 0.8758 |
| 0.1044 | 7.0 | 504 | 0.4116 | 0.8744 |
| 0.0931 | 8.0 | 576 | 0.4148 | 0.8761 |
| 0.0683 | 9.0 | 648 | 0.4423 | 0.8837 |
| 0.0611 | 10.0 | 720 | 0.4463 | 0.8835 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| 3,192 |
persiannlp/mbert-base-parsinlu-multiple-choice | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mbert
- persian
- farsi
pipeline_tag: text-classification
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is a mbert-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from typing import List
import torch
from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer
model_name = "persiannlp/mbert-base-parsinlu-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config)
def run_model(question: str, candicates: List[str]):
assert len(candicates) == 4, "you need four candidates"
choices_inputs = []
for c in candicates:
text_a = "" # empty context
text_b = question + " " + c
inputs = tokenizer(
text_a,
text_b,
add_special_tokens=True,
max_length=128,
padding="max_length",
truncation=True,
return_overflowing_tokens=True,
)
choices_inputs.append(inputs)
input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs])
output = model(input_ids=input_ids)
print(output)
return output
run_model(question="وسیع ترین کشور جهان کدام است؟", candicates=["آمریکا", "کانادا", "روسیه", "چین"])
run_model(question="طامع یعنی ؟", candicates=["آزمند", "خوش شانس", "محتاج", "مطمئن"])
run_model(
question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ",
candicates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"])
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
| 2,045 |
textattack/albert-base-v2-RTE | null | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.776173285198556, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
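Below is a minimal sketch of loading this checkpoint with plain `transformers` (outside of TextAttack); the premise/hypothesis pair is invented, and the label names may surface as generic `LABEL_0`/`LABEL_1` since the id-to-label mapping is not documented here.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/albert-base-v2-RTE")
model = AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-RTE")

# RTE is a sentence-pair (entailment) task: encode premise and hypothesis together.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A man is playing an instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```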
| 619 |
l3cube-pune/mahahate-multi-roberta | [
"Hate",
"Offensive",
"Profane",
"None"
] | ---
language: mr
tags:
license: cc-by-4.0
datasets:
- L3Cube-MahaHate
widget:
- text: "I like you. </s></s> I love you."
---
## MahaHate-multi-RoBERTa
MahaHate-multi-RoBERTa (Marathi hate speech identification) is a MahaRoBERTa (l3cube-pune/marathi-roberta) model fine-tuned on L3Cube-MahaHate, a Marathi tweet-based hate speech detection dataset. This is a four-class model with the labels hate, offensive, profane, and none. The 2-class model can be found <a href='https://huggingface.co/l3cube-pune/mahahate-bert'> here </a>
[Dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2203.13778)
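A minimal usage sketch (not part of the original card) with the standard `transformers` text-classification pipeline; replace the placeholder with an actual Marathi tweet.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="l3cube-pune/mahahate-multi-roberta",
)
# Returns one of the four labels: Hate, Offensive, Profane, None.
print(classifier("<your Marathi tweet here>"))
```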
| 712 |
frasermince/longformer-fake-news | null | Entry not found | 15 |
AbhiNaiky/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3170
- Accuracy: 0.8733
- F1: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,507 |
HiTZ/A2T_RoBERTa_SMFA_TACRED-re | [
"contradiction",
"entailment",
"neutral"
] | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information please, take a look to the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
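As a quick illustration, the sketch below uses the zero-shot classification pipeline from `transformers`, which the card states these models are compatible with; the input sentence, candidate labels, and default hypothesis template are only illustrative, and the task-specific verbalizations actually used for TACRED live in the Ask2Transformers library.
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_TACRED-re",
)
result = classifier(
    "Billy Mays, the bearded, boisterous pitchman, died at his home in Tampa.",
    candidate_labels=["person died in a place", "person was born in a place", "no relation"],
)
print(result["labels"][0], result["scores"][0])
```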
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` | 3,612 |
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_42 | [
"0",
"1"
] | Entry not found | 15 |
emre/turkish-sentiment-analysis | [
"Negative",
"Notr",
"Positive"
] | ---
tags: autotrain
language: tr
widget:
- text: "Bu ürün gerçekten güzel çıktı"
datasets:
- emre/autotrain-data-turkish-sentiment-analysis
co2_eq_emissions: 120.82460124309924
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 870727732
- CO2 Emissions (in grams): 120.82460124309924
## Validation Metrics
- Loss: 0.1098366305232048
- Accuracy: 0.9697853317600073
- Macro F1: 0.9482820974460786
- Micro F1: 0.9697853317600073
- Weighted F1: 0.9695237873890088
- Macro Precision: 0.9540948884759232
- Micro Precision: 0.9697853317600073
- Weighted Precision: 0.9694186941924757
- Macro Recall: 0.9428467518468838
- Micro Recall: 0.9697853317600073
- Weighted Recall: 0.9697853317600073
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Bu ürün gerçekten güzel çıktı"}' https://api-inference.huggingface.co/models/emre/turkish-sentiment-analysis
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emre/turkish-sentiment-analysis", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emre/turkish-sentiment-analysis", use_auth_token=True)
inputs = tokenizer("Bu ürün gerçekten güzel çıktı", return_tensors="pt")
outputs = model(**inputs)
``` | 1,421 |
Mim/biobert-procell-demo | [
"accept",
"reject"
] | ---
tags: biobert
language: unk
widget:
- text: "Cell lines expressing proteins 🤗"
datasets:
- Mim/autotrain-data-biobert-procell
co2_eq_emissions: 0.5988414315305852
---
# Model Trained Using biobert
- Problem type: Binary Classification
- Model ID: 896229149
- CO2 Emissions (in grams): 0.5988414315305852
## Validation Metrics
- Loss: 0.4045306444168091
- Accuracy: 0.8028169014084507
- Precision: 0.8070175438596491
- Recall: 0.9387755102040817
- AUC: 0.8812615955473099
- F1: 0.8679245283018868
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Cell lines expressing proteins"}' https://api-inference.huggingface.co/models/Mim/autotrain-biobert-procell-896229149
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Mim/autotrain-biobert-procell-896229149", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Mim/autotrain-biobert-procell-896229149", use_auth_token=True)
inputs = tokenizer("Cell lines expressing proteins", return_tensors="pt")
outputs = model(**inputs)
``` | 1,220 |
Dafa/factcc | null | ---
license: afl-3.0
---
| 25 |
Xuan-Rui/pet-1000-iPT.p4PTmBERT | null | Entry not found | 15 |
asdc/roberta-base-biomedical-clinical-es-finetuned-text_classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | Entry not found | 15 |
waboucay/camembert-large-finetuned-xnli_fr_3_classes-finetuned-repnum_wl-rua_wl_3_classes | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 75.4 | 75.4 |
| test | 76.1 | 76.0 | | 367 |
Siddish/autotrain-yes-or-no-classifier-on-circa-1009033469 | [
"I am not sure how X will interpret Y’s answer",
"In the middle, neither yes nor no",
"No",
"Other",
"Probably no",
"Probably yes / sometimes yes",
"Yes",
"Yes, subject to some conditions"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Siddish/autotrain-data-yes-or-no-classifier-on-circa
co2_eq_emissions: 0.1287915253247826
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1009033469
- CO2 Emissions (in grams): 0.1287915253247826
## Validation Metrics
- Loss: 0.4084862470626831
- Accuracy: 0.8722054859679721
- Macro F1: 0.6340608446004876
- Micro F1: 0.8722054859679722
- Weighted F1: 0.8679846554644491
- Macro Precision: 0.645023001823007
- Micro Precision: 0.8722054859679721
- Weighted Precision: 0.8656545967138464
- Macro Recall: 0.6283763558287574
- Micro Recall: 0.8722054859679721
- Weighted Recall: 0.8722054859679721
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Siddish/autotrain-yes-or-no-classifier-on-circa-1009033469
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Siddish/autotrain-yes-or-no-classifier-on-circa-1009033469", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Siddish/autotrain-yes-or-no-classifier-on-circa-1009033469", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,471 |
sam34738/xlm-roberta-hindi-nisha | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-hindi-nisha
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-hindi-nisha
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-emotion](https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1429 | 1.0 | 460 | 0.7002 |
| 0.5404 | 2.0 | 920 | 0.5305 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
| 1,357 |
matanbn/smsPhishing | null | Entry not found | 15 |
baykenney/bert-base-gpt2detector-topp96 | [
"Human",
"Machine"
] | Entry not found | 15 |
bella/bert_finetuning_test | null | Entry not found | 15 |
blackbird/alberta-base-mnli-v1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | 0 | |
boychaboy/MNLI_distilbert-base-cased_2 | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
classla/roberta-base-frenk-hate | null | ---
language: "en"
tags:
- text-classification
- hate-speech
widget:
- text: "Gay is okay."
---
# roberta-base-frenk-hate
Text classification model based on [`roberta-base`](https://huggingface.co/roberta-base) and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433), which comprises LGBT and migrant hate speech. Only the English subset of the data was used for fine-tuning, and the dataset was relabeled for binary classification (offensive or acceptable).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
```python
model_args = {
"num_train_epochs": 6,
"learning_rate": 3e-6,
"train_batch_size": 69}
```
## Performance
The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed afterwards.
| model | average accuracy | average macro F1|
|---|---|---|
|roberta-base-frenk-hate|0.7915|0.7785|
|xlm-roberta-large |0.7904|0.77876|
|xlm-roberta-base |0.7577|0.7402|
|fasttext|0.725 |0.707 |
From recorded accuracies and macro F1 scores p-values were also calculated:
Comparison with `xlm-roberta-base`:
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.00781|0.00781|
|Mann-Whitney U-test|0.00108|0.00108|
|Student t-test | 1.35e-08 | 1.05e-07|
Comparison with `xlm-roberta-large` yielded inconclusive results. `roberta-base` has an average accuracy of 0.7915, while `xlm-roberta-large` has an average accuracy of 0.7904. If macro F1 scores are compared instead, `roberta-base` actually has a lower average than `xlm-roberta-large`: 0.77852 vs. 0.77876, respectively. The same statistical tests were performed with the premise that `roberta-base` has the greater metrics, and the results are given below.
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.188|0.406|
|Mann-Whitney U-test|0.375|0.649|
|Student t-test | 0.681| 0.934|
With the reversed premise (i.e., that `xlm-roberta-large` has greater statistics), the Wilcoxon p-value for macro F1 scores reaches 0.656, the Mann-Whitney p-value is 0.399, and of course the Student p-value stays the same. It was therefore concluded that the performance of the two models is not statistically significantly different.
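For readers who want to reproduce this kind of comparison, the sketch below shows the corresponding `scipy.stats` calls; the score arrays are placeholders, not the actual recorded runs.
```python
from scipy.stats import mannwhitneyu, ttest_ind, wilcoxon

# Placeholder macro F1 scores from 6 fine-tuning sessions per model (not the real values).
roberta_f1 = [0.779, 0.781, 0.776, 0.780, 0.778, 0.777]
xlmr_base_f1 = [0.741, 0.739, 0.742, 0.738, 0.740, 0.741]

print(wilcoxon(roberta_f1, xlmr_base_f1))                             # paired rank test
print(mannwhitneyu(roberta_f1, xlmr_base_f1, alternative="greater"))  # unpaired rank test
print(ttest_ind(roberta_f1, xlmr_base_f1))                            # Student t-test
```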
## Use examples
```python
from simpletransformers.classification import ClassificationModel
model_args = {
"num_train_epochs": 6,
"learning_rate": 3e-6,
"train_batch_size": 69}
model = ClassificationModel(
"roberta", "5roop/roberta-base-frenk-hate", use_cuda=True,
args=model_args
)
predictions, logit_output = model.predict(["Build the wall",
"Build the wall of trust"]
)
predictions
### Output:
### array([1, 0])
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
and the dataset used for fine-tuning:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
```
| 4,315 |
cross-encoder/msmarco-MiniLM-L12-en-de-v1 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS MARCO - EN-DE
This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html).
The training code is available in this repository, see `train_script.py`.
## Usage with SentenceTransformers
When you have [SentenceTransformers](https://www.sbert.net/) installed, you can use the model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/msmarco-MiniLM-L12-en-de-v1', max_length=512)
query = 'How many people live in Berlin?'
docs = ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs)
```
## Usage with Transformers
With the transformers library, you can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/msmarco-MiniLM-L12-en-de-v1')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/msmarco-MiniLM-L12-en-de-v1')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Performance
The performance was evaluated on three datasets:
- **TREC-DL19 EN-EN**: The original [TREC 2019 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019.html): Given an English query and 1000 documents (retrieved by BM25 lexical search), rank the documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46; a perfect re-ranker can achieve a score of 95.47.
- **TREC-DL19 DE-EN**: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
- **GermanDPR DE-DE**: The [GermanDPR](https://www.deepset.ai/germanquad) dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 Million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.
We also check the performance of bi-encoders using the same evaluation: The retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.
| Model-Name | TREC-DL19 EN-EN | TREC-DL19 DE-EN | GermanDPR DE-DE | Docs / Sec |
| ------------- |:-------------:| :-----: | :---: | :----: |
| BM25 | 45.46 | - | 35.85 | -|
| **Cross-Encoder Re-Rankers** | | | |
| [cross-encoder/msmarco-MiniLM-L6-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L6-en-de-v1) | 72.43 | 65.53 | 46.77 | 1600 |
| [cross-encoder/msmarco-MiniLM-L12-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L12-en-de-v1) | 72.94 | 66.07 | 49.91 | 900 |
| [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) (DE only) | - | - | 53.67 | 260 |
| [deepset/gbert-base-germandpr-reranking](https://huggingface.co/deepset/gbert-base-germandpr-reranking) (DE only) | - | - | 53.59 | 260 |
| **Bi-Encoders (re-ranking)** | | | |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) | 63.38 | 58.28 | 37.88 | 940 |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch) | 65.51 | 58.69 | 38.32 | 940 |
| [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased) (DE only) | - | - | 34.31 | 450 |
| [deepset/gbert-base-germandpr-question_encoder](https://huggingface.co/deepset/gbert-base-germandpr-question_encoder) (DE only) | - | - | 42.55 | 450 |
Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
| 4,796 |
m3hrdadfi/albert-fa-base-v2-sentiment-multi | [
"Negative",
"Neutral",
"Positive"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on its emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in both its binary and multi-class forms.
## Results
The model obtained an F1 score of 70.72% on a combination of all three datasets using the multi-class labels `Negative`, `Neutral`, and `Positive`.
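A minimal usage sketch follows; it assumes the standard `transformers` text-classification pipeline, and the Persian example sentence ("this product is great") is illustrative.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="m3hrdadfi/albert-fa-base-v2-sentiment-multi",
)
print(classifier("این محصول عالی است"))  # expected label: Positive
```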
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. | 1,938 |
prajjwal1/albert-base-v1-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | If you use the model, please consider citing this paper
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 352 |
Miniproject/BERT | [
"1 star",
"2 stars",
"3 stars",
"4 stars",
"5 stars"
] | ---
language:
- en
---
# Bert-base-uncased-sentiment
BERT stands for Bidirectional Encoder Representations from Transformers. It was introduced in a paper published by researchers at Google AI Language. BERT makes use of the Transformer, an attention mechanism that learns contextual relations between words (or sub-words) in a text. In its vanilla form, the Transformer includes two separate mechanisms: an encoder that reads the text input and a decoder that produces a prediction for the task. Since BERT's goal is to generate a language model, only the encoder mechanism is necessary.
Bidirectional - to understand the text you're looking at, you have to look both back (at the previous words) and forward (at the next words).
Transformers - The "Attention Is All You Need" paper presented the Transformer model. The Transformer reads entire sequences of tokens at once. In a sense, the model is non-directional, while LSTMs read sequentially (left-to-right or right-to-left). The attention mechanism allows for learning contextual relations between words.
(Pre-trained) contextualized word embeddings - The ELMO paper introduced a way to encode words based on their meaning/context. Nails has multiple meanings - fingernails and metal nails. BERT was trained by masking 15% of the tokens with the goal to guess them. An additional objective was to predict the next sentence. Let’s look at examples of these tasks:
Masked Language Modeling (Masked LM)
The objective of this task is to guess the masked tokens.
Before feeding word sequences into BERT, 15% of the words in each sentence are replaced with a special token called the masked token ([MASK]). The job of BERT is then to predict that hidden (masked) word by looking at the non-masked words around it; that is, the model attempts to recover the original value of the masked words based on the context provided by the other, non-masked, words in the sequence.
That’s [mask] she [mask] -> That’s what she said
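The snippet below (illustrative, using the plain pre-trained bert-base-uncased checkpoint rather than this fine-tuned model) shows masked language modeling in practice with the fill-mask pipeline.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
# Note: the actual mask token for BERT is written [MASK].
for prediction in fill_mask("That's [MASK] she said."):
    print(prediction["token_str"], round(prediction["score"], 4))
```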
Next Sentence Prediction (NSP)
In this training process, BERT receives pairs of sentences as input and learns to predict whether the second sentence in the pair follows the first (i.e., whether it occurs just after the first sentence in the training corpus).
During training, 50% of the inputs are pairs in which the second sentence actually follows the first, while in the other 50% the second sentence is a random sentence from the corpus and therefore does not form a true pair.
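Similarly, the sketch below (illustrative, again on the plain pre-trained checkpoint) shows next sentence prediction.
```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

encoding = tokenizer("The man went to the store.", "He bought a gallon of milk.", return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
# Index 0 means "second sentence follows the first", index 1 means "random sentence".
print("is next sentence:", bool(logits.argmax(dim=-1).item() == 0))
```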
BERT Training Dataset
The training corpus was composed of two sources: the Toronto Book Corpus (800M words) and English Wikipedia (2,500M words). While the original Transformer has an encoder (for reading the input) and a decoder (that makes the prediction), BERT uses only the encoder.
BERT is simply a pre-trained stack of Transformer encoders. How many encoders? There are two versions: BERT base with 12 encoder layers stacked on top of each other, and BERT large with 24. The difference between BERT base and BERT large is only the number of encoder layers. BERT performs better than earlier models, and BERT large improves on BERT base further.
The BERT paper was released along with the source code and pre-trained models.
The best part is that you can do Transfer Learning (thanks to the ideas from OpenAI Transformer) with BERT for many NLP tasks - Classification, Question Answering, Entity Recognition, etc. You can train with small amounts of data and achieve great performance!
This is a bert-base-uncased model finetuned for sentiment analysis on product reviews in the English language. It predicts the sentiment of the review as a number of stars (between 1 and 5).
This model is intended for direct use as a sentiment analysis model for product reviews, or for further finetuning on related sentiment analysis tasks.
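A minimal usage sketch is given below; it assumes this checkpoint is loadable as `Miniproject/BERT` (the repository this card belongs to) and that the label names follow the 1-to-5-star scheme described above. The review text is invented.
```python
from transformers import pipeline

rate_review = pipeline("text-classification", model="Miniproject/BERT")
print(rate_review("The product arrived on time and works perfectly."))
# e.g. [{'label': '5 stars', 'score': ...}]
```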
## Training data
Here is the number of product reviews we used for finetuning the model:
| Language | Number of reviews |
| -------- | ----------------- |
| English | 150k |
## Accuracy
The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:
- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------------- | ------------------- |
| English | 67% | 95%
| 4,765 |
Bryan0123/bert-hashtag-to-hashtag | [
"#art",
"#beautiful",
"#fashion",
"#instagood",
"#instagram",
"#love",
"#nature",
"#photography",
"#photooftheday",
"#travel"
] | Entry not found | 15 |
Raychanan/Longformer_Conflict | null | training_args = TrainingArguments(
output_dir="./results",
learning_rate=5e-5,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=5,
weight_decay=0.01,
evaluation_strategy="epoch",
push_to_hub=True
) | 258 |
bomera/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254067711979133
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2105
- Accuracy: 0.9255
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8201 | 1.0 | 250 | 0.2949 | 0.913 | 0.9114 |
| 0.2375 | 2.0 | 500 | 0.2105 | 0.9255 | 0.9254 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,807 |
Manishkalra/finetuning-movie-sentiment-model-9000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-movie-sentiment-model-9000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9177777777777778
- name: F1
type: f1
value: 0.9155251141552511
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-movie-sentiment-model-9000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4040
- Accuracy: 0.9178
- F1: 0.9155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,533 |
Xuan-Rui/pet-1000-iPT.p4PTptBERT | null | Entry not found | 15 |
sahn/distilbert-base-uncased-finetuned-imdb-blur | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-imdb-blur
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9776
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb-blur
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1484
- Accuracy: 0.9776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Added `...` at the end of all the sentences with the label 1, and `;` with the label 0.
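A hypothetical reconstruction of that modification is sketched below; the function name and `datasets` usage are assumptions, not the author's actual code.
```python
from datasets import load_dataset

def add_marker(example):
    # Label 1 gets "..." appended, label 0 gets ";".
    example["text"] += "..." if example["label"] == 1 else ";"
    return example

imdb_blur = load_dataset("imdb").map(add_marker)
```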
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0662 | 1.0 | 1250 | 0.0524 | 0.9762 |
| 0.0365 | 2.0 | 2500 | 0.0683 | 0.9756 |
| 0.012 | 3.0 | 3750 | 0.0455 | 0.9906 |
| 0.0051 | 4.0 | 5000 | 0.1425 | 0.9742 |
| 0.001 | 5.0 | 6250 | 0.1484 | 0.9776 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,935 |
cardiffnlp/tweet-topic-19-single | [
"arts_&_culture",
"business_&_entrepreneurs",
"daily_life",
"pop_culture",
"science_&_technology",
"sports_&_gaming"
] | # tweet-topic-19-single
This is a roBERTa-base model trained on ~90m tweets until the end of 2019 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m)), and finetuned for single-label topic classification on a corpus of 6,997 tweets.
The original roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
- 0 -> arts_&_culture;
- 1 -> business_&_entrepreneurs;
- 2 -> pop_culture;
- 3 -> daily_life;
- 4 -> sports_&_gaming;
- 5 -> science_&_technology
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
MODEL = f"cardiffnlp/tweet-topic-19-single"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label
text = "Tesla stock is on the rise!"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# TF
#model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
#class_mapping = model.config.id2label
#text = "Tesla stock is on the rise!"
#encoded_input = tokenizer(text, return_tensors='tf')
#output = model(**encoded_input)
#scores = output[0][0]
#scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = class_mapping[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) business_&_entrepreneurs 0.8575
2) science_&_technology 0.0604
3) pop_culture 0.0295
4) daily_life 0.0217
5) sports_&_gaming 0.0154
6) arts_&_culture 0.0154
``` | 2,122 |
edmundhui/mental_health_trainer | [
"ADHD",
"OCD",
"aspergers",
"depression",
"ptsd"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mental_health_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental_health_trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [reddit_mental_health_posts](https://huggingface.co/datasets/solomonk/reddit_mental_health_posts) dataset.
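A minimal usage sketch (not part of the original card) is shown below; the label names (ADHD, OCD, aspergers, depression, ptsd) are taken from the checkpoint's configuration, and the input sentence is invented.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="edmundhui/mental_health_trainer")
print(classifier("I keep re-checking the locks dozens of times before I can leave the house."))
```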
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,121 |
semy/finetuning-tweeteval-hate-speech | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-tweeteval-hate-speech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-tweeteval-hate-speech
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8397
- Accuracy: 0.0
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,188 |
anneke/finetuning-distilbert-base-uncased-5000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-5000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-base-uncased-5000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1147
- Accuracy: 0.982
- F1: 0.9904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,231 |
CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-Mix Poetry Classification Model
## Model description
**CAMeLBERT-Mix Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9937475919723511},
{'label': 'الكامل', 'score': 0.971284031867981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 3,368 |
Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2 | null | ---
language: en
widget:
- text: "I got a rash from taking acetaminophen"
tags:
- sagemaker
- bert-base-uncased
- text classification
license: apache-2.0
datasets:
- adecorpusv2
model-index:
- name: BERT-ade_corpus
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: "ade_corpus_v2Ade_corpus_v2_classification"
type: ade_corpus
metrics:
- name: Validation Accuracy
type: accuracy
value: 92.98
- name: Validation F1
type: f1
value: 82.73
---
## bert-base-uncased
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
- Problem type: Text Classification (adverse drug effects detection).
## Hyperparameters
```json
{
"do_eval": true,
"do_train": true,
"fp16": true,
"load_best_model_at_end": true,
"model_name": "bert-base-uncased",
"num_train_epochs": 10,
"per_device_eval_batch_size": 16,
"per_device_train_batch_size": 16,
"learning_rate":5e-5
}
```
## Validation Metrics
| key | value |
| --- | ----- |
| eval_accuracy | 0.9298021697511167 |
| eval_auc | 0.8902672664394546 |
| eval_f1 | 0.827315541601256 |
| eval_loss | 0.17835010588169098 |
| eval_recall | 0.8234375 |
| eval_precision | 0.831230283911672 |
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I got a rash from taking acetaminophen"}' https://api-inference.huggingface.co/models/Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
```
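Or with the Python API (a minimal sketch mirroring the cURL example; standard transformers usage is assumed):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_id = "Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("I got a rash from taking acetaminophen"))
```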
""" | 1,618 |
JovenPai/bert_finetunning_test | [
"LABEL_0",
"LABEL_1"
] | Entry not found | 15 |
Maha/hi-const21-hibert_final | null | Entry not found | 15 |
PubChimps/dlfBERT | null | Entry not found | 15 |
TehranNLP/bert-base-cased-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
airKlizz/gbert-base-germeval21-toxic-with-data-augmentation | null | Entry not found | 15 |
microsoft/tapex-large-finetuned-tabfact | [
"LABEL_0",
"LABEL_1"
] | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- tab_fact
license: mit
---
# TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-large` model fine-tuned on the [TabFact](https://huggingface.co/datasets/tab_fact) dataset.
## Intended Uses
You can use the model for table fact verification.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForSequenceClassification
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = BartForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "beijing hosts the olympic games in 2012"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model(**encoding)
output_id = int(outputs.logits[0].argmax(dim=0))
print(model.config.id2label[output_id])
# Refused
```
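If you also want a confidence value for the prediction, you can apply a softmax over the logits — a small sketch that reuses `model` and `outputs` from the snippet above:
```python
import torch

# Turn the raw logits into probabilities over the two TabFact labels
probs = torch.softmax(outputs.logits[0], dim=-1)
predicted_id = int(probs.argmax())
print(model.config.id2label[predicted_id], float(probs[predicted_id]))
```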
### How to Eval
Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` | 2,576 |
palakagl/bert_MultiClass_TextClassification | [
"alarm_query",
"alarm_remove",
"alarm_set",
"audio_volume_down",
"audio_volume_mute",
"audio_volume_up",
"calendar_query",
"calendar_remove",
"calendar_set",
"cooking_recipe",
"datetime_convert",
"datetime_query",
"email_addcontact",
"email_query",
"email_querycontact",
"email_sendemail",
"general_affirm",
"general_commandstop",
"general_confirm",
"general_dontcare",
"general_explain",
"general_joke",
"general_negate",
"general_praise",
"general_quirky",
"general_repeat",
"iot_cleaning",
"iot_coffee",
"iot_hue_lightchange",
"iot_hue_lightdim",
"iot_hue_lightoff",
"iot_hue_lighton",
"iot_hue_lightup",
"iot_wemo_off",
"iot_wemo_on",
"lists_createoradd",
"lists_query",
"lists_remove",
"music_likeness",
"music_query",
"music_settings",
"news_query",
"play_audiobook",
"play_game",
"play_music",
"play_podcasts",
"play_radio",
"qa_currency",
"qa_definition",
"qa_factoid",
"qa_maths",
"qa_stock",
"recommendation_events",
"recommendation_locations",
"recommendation_movies",
"social_post",
"social_query",
"takeaway_order",
"takeaway_query",
"transport_query",
"transport_taxi",
"transport_ticket",
"transport_traffic",
"weather_query"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- palakagl/autotrain-data-PersonalAssitant
co2_eq_emissions: 5.080390550458655
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 717221775
- CO2 Emissions (in grams): 5.080390550458655
## Validation Metrics
- Loss: 0.35279911756515503
- Accuracy: 0.9269102990033222
- Macro F1: 0.9261839948926327
- Micro F1: 0.9269102990033222
- Weighted F1: 0.9263981751760975
- Macro Precision: 0.9273912049203341
- Micro Precision: 0.9269102990033222
- Weighted Precision: 0.9280084437800646
- Macro Recall: 0.927250645380574
- Micro Recall: 0.9269102990033222
- Weighted Recall: 0.9269102990033222
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/palakagl/autotrain-PersonalAssitant-717221775
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("palakagl/autotrain-PersonalAssitant-717221775", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("palakagl/autotrain-PersonalAssitant-717221775", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
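# Illustrative extra step (not part of the original card): map the top logit to its intent label
predicted_id = int(outputs.logits[0].argmax())
print(model.config.id2label[predicted_id])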
``` | 1,418 |
okho0653/Bio_ClinicalBERT-zero-shot-sentiment-model | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Bio_ClinicalBERT-zero-shot-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT-zero-shot-sentiment-model
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,076 |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_42 | [
"0",
"1",
"2",
"3",
"4"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_66 | [
"0",
"1",
"2",
"3",
"4"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_88 | [
"0",
"1",
"2",
"3",
"4"
] | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_99 | [
"0",
"1",
"2",
"3",
"4"
] | Entry not found | 15 |
ziq/depression_tweet | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: depression_tweet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# depression_tweet
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- Accuracy: 0.9565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 216 | 0.1369 | 0.9497 |
| No log | 2.0 | 432 | 0.1588 | 0.9552 |
| 0.0514 | 3.0 | 648 | 0.1647 | 0.9562 |
| 0.0514 | 4.0 | 864 | 0.1606 | 0.9565 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,547 |
jboomc/rotten_tomatoes_finetuned | [
"neg",
"pos"
] | Entry not found | 15 |
RomanCast/no_init_miam_loria_finetuned | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",
"LABEL_3",
"LABEL_30",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
language:
- fr
--- | 22 |
ArneD/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9218894133133121
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.922
- F1: 0.9219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8205 | 1.0 | 250 | 0.3028 | 0.909 | 0.9061 |
| 0.245 | 2.0 | 500 | 0.2147 | 0.922 | 0.9219 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,805 |
Anonymous1111/bert-base-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: apache-2.0
---
| 28 |
Elron/bleurt-base-128 | [
"LABEL_0"
] | ## BLEURT
PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([0.3598, 0.0723])
```
| 999 |
bipin/malayalam-news-classifier | [
"business",
"entertainment",
"sports"
] | ---
license: mit
tags:
- text-classification
- roberta
- malayalam
- pytorch
widget:
- text: "2032 ഒളിമ്പിക്സിന് ബ്രിസ്ബെയ്ന് വേദിയാകും; ഗെയിംസിന് വേദിയാകുന്ന മൂന്നാമത്തെ ഓസ്ട്രേലിയന് നഗരം"
---
## Malayalam news classifier
### Overview
This model was trained on top of [MalayalamBert](https://huggingface.co/eliasedwin7/MalayalamBERT) for the task of classifying Malayalam news headlines. Presently, the following news categories are supported:
* Business
* Sports
* Entertainment
### Dataset
The dataset used for training this model can be found [here](https://www.kaggle.com/disisbig/malyalam-news-dataset).
### Using the model with HF pipeline
```python
from transformers import pipeline
news_headline = "ക്രിപ്റ്റോ ഇടപാടുകളുടെ വിവരങ്ങൾ ആവശ്യപ്പെട്ട് ആദായനികുതി വകുപ്പ് നോട്ടീസയച്ചു"
model = pipeline(task="text-classification", model="bipin/malayalam-news-classifier")
model(news_headline)
# Output
# [{'label': 'business', 'score': 0.9979357123374939}]
```
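To see the scores for all three categories instead of only the top prediction, the pipeline also accepts `return_all_scores=True` (a small sketch; depending on your transformers version the equivalent option may be `top_k=None`):
```python
# Return a score for each supported category: business, sports, entertainment
model(news_headline, return_all_scores=True)
```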
### Contact
For feedback and questions, feel free to contact via twitter [@bkrish_](https://twitter.com/bkrish_) | 1,096 |