modelId stringlengths 6 107 | label list | readme stringlengths 0 56.2k | readme_len int64 0 56.2k |
|---|---|---|---|
NYTK/sentiment-hts2-hubert-hungarian | null | ---
language:
- hu
tags:
- text-classification
license: gpl
metrics:
- accuracy
widget:
- text: "Jó reggelt! majd küldöm az élményhozókat :)."
---
# Hungarian Sentence-level Sentiment Analysis model with huBERT
For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained model used: huBERT
- Finetuned on Hungarian Twitter Sentiment (HTS) Corpus
- Labels: 1, 2
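## Usage
A minimal usage sketch with the `transformers` pipeline (that the mapping of the raw labels 1 and 2 to sentiment classes ships in the model config is an assumption here):
```python
from transformers import pipeline

# Sentence-level sentiment for Hungarian text.
classifier = pipeline("text-classification", model="NYTK/sentiment-hts2-hubert-hungarian")
print(classifier("Jó reggelt! majd küldöm az élményhozókat :)."))
```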
## Limitations
- max_seq_length = 128
## Results
| Model | HTS2 | HTS5 |
| ------------- | ------------- | ------------- |
| huBERT | **85.55** | 68.99 |
| XLM-RoBERTa | 85.56 | 85.56 |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{yang-bart,
title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods},
booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)},
year = {2021},
publisher = {IEEE},
address = {Online},
author = {Laki, László and Yang, Zijian Győző},
pages = {417--422}
}
``` | 1,139 |
wonrax/phobert-base-vietnamese-sentiment | [
"NEG",
"NEU",
"POS"
] | ---
language:
- vi
tags:
- sentiment
- classification
license: mit
widget:
- text: "Không thể nào đẹp hơn"
- text: "Quá phí tiền, mà không đẹp"
- text: "Cái này giá ổn không nhỉ?"
---
[**GitHub Homepage**](https://github.com/wonrax/phobert-base-vietnamese-sentiment)
A model fine-tuned for sentiment analysis based on [vinai/phobert-base](https://huggingface.co/vinai/phobert-base).
Labels:
- NEG: Negative
- POS: Positive
- NEU: Neutral
Dataset: [30K e-commerce reviews](https://www.kaggle.com/datasets/linhlpv/vietnamese-sentiment-analyst)
## Usage
```python
import torch
from transformers import RobertaForSequenceClassification, AutoTokenizer
model = RobertaForSequenceClassification.from_pretrained("wonrax/phobert-base-vietnamese-sentiment")
tokenizer = AutoTokenizer.from_pretrained("wonrax/phobert-base-vietnamese-sentiment", use_fast=False)
# Just like PhoBERT: INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
sentence = 'Đây là mô_hình rất hay , phù_hợp với điều_kiện và như cầu của nhiều người .'
input_ids = torch.tensor([tokenizer.encode(sentence)])
with torch.no_grad():
out = model(input_ids)
print(out.logits.softmax(dim=-1).tolist())
# Output:
# [[0.002, 0.988, 0.01]]
# ^ ^ ^
# NEG POS NEU
```
| 1,267 |
g8a9/bert-base-cased_ami18 | null | Entry not found | 15 |
mgrella/autonlp-bank-transaction-classification-5521155 | [
"Category.BILLS_SUBSCRIPTIONS_BILLS",
"Category.BILLS_SUBSCRIPTIONS_INTERNET_PHONE",
"Category.BILLS_SUBSCRIPTIONS_OTHER",
"Category.BILLS_SUBSCRIPTIONS_SUBSCRIPTIONS",
"Category.CREDIT_CARDS_CREDIT_CARDS",
"Category.EATING_OUT_COFFEE_SHOPS",
"Category.EATING_OUT_OTHER",
"Category.EATING_OUT_RESTAURANTS",
"Category.EATING_OUT_TAKEAWAY_RESTAURANTS",
"Category.HEALTH_WELLNESS_AID_EXPENSES",
"Category.HEALTH_WELLNESS_DRUGS",
"Category.HEALTH_WELLNESS_GYMS",
"Category.HEALTH_WELLNESS_MEDICAL_EXPENSES",
"Category.HEALTH_WELLNESS_OTHER",
"Category.HEALTH_WELLNESS_WELLNESS_RELAX",
"Category.HOUSING_FAMILY_APPLIANCES",
"Category.HOUSING_FAMILY_CHILDHOOD",
"Category.HOUSING_FAMILY_FURNITURE",
"Category.HOUSING_FAMILY_GROCERIES",
"Category.HOUSING_FAMILY_INSURANCES",
"Category.HOUSING_FAMILY_MAINTENANCE_RENOVATION",
"Category.HOUSING_FAMILY_OTHER",
"Category.HOUSING_FAMILY_RENTS",
"Category.HOUSING_FAMILY_SERVANTS",
"Category.HOUSING_FAMILY_VETERINARY",
"Category.LEISURE_BOOKS",
"Category.LEISURE_CINEMA",
"Category.LEISURE_CLUB_ASSOCIATIONS",
"Category.LEISURE_GAMBLING",
"Category.LEISURE_MAGAZINES_NEWSPAPERS",
"Category.LEISURE_MOVIES_MUSICS",
"Category.LEISURE_OTHER",
"Category.LEISURE_SPORT_EVENTS",
"Category.LEISURE_THEATERS_CONCERTS",
"Category.LEISURE_VIDEOGAMES",
"Category.MORTGAGES_LOANS_LOANS",
"Category.MORTGAGES_LOANS_MORTGAGES",
"Category.OTHER_CASH",
"Category.OTHER_CHECKS",
"Category.OTHER_OTHER",
"Category.PROFITS_PROFITS",
"Category.SHOPPING_ACCESSORIZE",
"Category.SHOPPING_CLOTHING",
"Category.SHOPPING_FOOTWEAR",
"Category.SHOPPING_HI_TECH",
"Category.SHOPPING_OTHER",
"Category.SHOPPING_SPORT_ARTICLES",
"Category.TAXES_SERVICES_BANK_FEES",
"Category.TAXES_SERVICES_DEFAULT_PAYMENTS",
"Category.TAXES_SERVICES_MONEY_ORDERS",
"Category.TAXES_SERVICES_OTHER",
"Category.TAXES_SERVICES_PROFESSIONAL_ACTIVITY",
"Category.TAXES_SERVICES_PROFIT_DEDUCTION",
"Category.TAXES_SERVICES_TAXES",
"Category.TRANSFERS_BANK_TRANSFERS",
"Category.TRANSFERS_GIFTS_DONATIONS",
"Category.TRANSFERS_INVESTMENTS",
"Category.TRANSFERS_OTHER",
"Category.TRANSFERS_REFUNDS",
"Category.TRANSFERS_RENT_INCOMES",
"Category.TRANSFERS_SAVINGS",
"Category.TRAVELS_TRANSPORTATION_BUSES",
"Category.TRAVELS_TRANSPORTATION_CAR_RENTAL",
"Category.TRAVELS_TRANSPORTATION_FLIGHTS",
"Category.TRAVELS_TRANSPORTATION_FUEL",
"Category.TRAVELS_TRANSPORTATION_HOTELS",
"Category.TRAVELS_TRANSPORTATION_OTHER",
"Category.TRAVELS_TRANSPORTATION_PARKING_URBAN_TRANSPORTS",
"Category.TRAVELS_TRANSPORTATION_TAXIS",
"Category.TRAVELS_TRANSPORTATION_TOLLS",
"Category.TRAVELS_TRANSPORTATION_TRAINS",
"Category.TRAVELS_TRANSPORTATION_TRAVELS_HOLIDAYS",
"Category.TRAVELS_TRANSPORTATION_VEHICLE_MAINTENANCE",
"Category.WAGES_PENSION",
"Category.WAGES_PROFESSIONAL_COMPENSATION",
"Category.WAGES_SALARY"
] | ---
tags: autonlp
language: it
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mgrella/autonlp-data-bank-transaction-classification
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 5521155
## Validation Metrics
- Loss: 1.3173143863677979
- Accuracy: 0.8220706757594545
- Macro F1: 0.5713688384455807
- Micro F1: 0.8220706757594544
- Weighted F1: 0.8217158913702755
- Macro Precision: 0.6064387992817253
- Micro Precision: 0.8220706757594545
- Weighted Precision: 0.8491515834140735
- Macro Recall: 0.5873349311175117
- Micro Recall: 0.8220706757594545
- Weighted Recall: 0.8220706757594545
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mgrella/autonlp-bank-transaction-classification-5521155
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mgrella/autonlp-bank-transaction-classification-5521155", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mgrella/autonlp-bank-transaction-classification-5521155", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
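# Decode the prediction (an illustrative sketch added here; it assumes the
# Category.* labels listed above are stored in model.config.id2label).
predicted_class = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])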
``` | 1,366 |
Smith123/tiny-bert-sst2-distilled_L6_H128 | [
"negative",
"positive"
] | Entry not found | 15 |
Lurunchik/nf-cats | [
"NOT-A-QUESTION",
"FACTOID",
"DEBATE",
"EVIDENCE-BASED",
"INSTRUCTION",
"REASON",
"EXPERIENCE",
"COMPARISON"
] | ---
language:
- en
license: mit
tags:
- text-classification
inference: false
widget:
- text: "Why do we need an NFQA taxonomy?"
---
# Non Factoid Question Category classification in English
## NFQA model
Repository: [https://github.com/Lurunchik/NF-CATS](https://github.com/Lurunchik/NF-CATS)
The model was trained on the NFQA dataset. The base model is [roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2), a RoBERTa-based Question Answering model fine-tuned on the SQuAD2.0 dataset.
Uses `NOT-A-QUESTION`, `FACTOID`, `DEBATE`, `EVIDENCE-BASED`, `INSTRUCTION`, `REASON`, `EXPERIENCE`, `COMPARISON` labels.
## How to use NFQA cat with HuggingFace
##### Load NFQA cat and its tokenizer:
```python
from transformers import AutoTokenizer
from nfqa_model import RobertaNFQAClassification
nfqa_model = RobertaNFQAClassification.from_pretrained("Lurunchik/nf-cats")
nfqa_tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
```
##### Make prediction using helper function:
```python
def get_nfqa_category_prediction(text):
output = nfqa_model(**nfqa_tokenizer(text, return_tensors="pt"))
index = output.logits.argmax()
return nfqa_model.config.id2label[int(index)]
get_nfqa_category_prediction('how to assign category?')
# result
#'INSTRUCTION'
```
## Demo
You can test the model via the [Hugging Face space](https://huggingface.co/spaces/Lurunchik/nf-cats).
[](https://huggingface.co/spaces/Lurunchik/nf-cats)
## Citation
If you use `NFQA-cats` in your work, please cite [this paper](https://dl.acm.org/doi/10.1145/3477495.3531926)
```
@misc{bolotova2022nfcats,
author = {Bolotova, Valeriia and Blinov, Vladislav and Scholer, Falk and Croft, W. Bruce and Sanderson, Mark},
title = {A Non-Factoid Question-Answering Taxonomy},
year = {2022},
isbn = {9781450387323},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477495.3531926},
doi = {10.1145/3477495.3531926},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {1196--1207},
numpages = {12},
keywords = {question taxonomy, non-factoid question-answering, editorial study, dataset analysis},
location = {Madrid, Spain},
series = {SIGIR '22}
}
```
Enjoy! 🤗 | 2,463 |
Gerwin/bert-for-pac | null | ---
language:
- nl
tags:
- bert
- passive
- active
license: apache-2.0
---
## Dutch Fine-Tuned BERT For Passive/Active Voice Classification.
### Lijdende en Bedrijvende vorm classificatie voor zinnen
#### Examples
Try the following examples in the Hosted inference API:
1. Jan werd opgehaald door zijn moeder.
2. Wie niet weg is, is gezien
3. Ik ben van plan om morgen te gaan werken
4. De makelaar heeft het nieuwe huis verkocht aan de bewoners die iets verderop wonen.
5. De koekjes die mama had gemaakt waren door de jongens allemaal opgegeten.
LABEL_0 = Active / Bedrijvend. LABEL_1 = Passive / Lijdend
Answers (what they should be):
1. 1
2. 1
3. 0
4. 0
5. 1
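A minimal sketch for running these examples programmatically (assumes the LABEL_0/LABEL_1 mapping described above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Gerwin/bert-for-pac")
print(classifier("Jan werd opgehaald door zijn moeder."))
# Expected label: LABEL_1 (passive / lijdend)
```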
#### Basic Information
This model is fine-tuned on [BERTje](https://huggingface.co/GroNLP/bert-base-dutch-cased) for recognizing passive and active voice in Dutch sentences.
Contact me at gerwindekruijf@gmail.com for further questions.
Gerwin | 916 |
HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary | [
"negative",
"positive"
] | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.
### DeepSentiPers
This dataset, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes: two positive (happy and delighted), two negative (furious and angry), and one neutral. It can therefore be used for both multi-class and binary classification. In the binary case, the neutral class and its corresponding sentences are removed from the dataset.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
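For a quick start outside the notebook, a minimal sketch (assumes the binary negative/positive label mapping shipped with the model):
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary")
print(classifier("این محصول عالی است"))  # "This product is great."
```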
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | 3,267 |
unicamp-dl/mMiniLM-L6-v2-mmarco-v2 | [
"LABEL_0"
] | ---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mMiniLM-L6-v2 Reranker finetuned on mMARCO
## Introduction
mMiniLM-L6-v2-mmarco-v2 is a multilingual MiniLM-based model fine-tuned on mMARCO, a multilingual version of the MS MARCO passage dataset formed by passages in 9 different languages translated from the English MS MARCO passage collection.
In the v2 version, the datasets were translated using Google Translate.
Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import AutoTokenizer, AutoModel
model_name = 'unicamp-dl/mMiniLM-L6-v2-mmarco-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
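For reranking, the checkpoint can also be loaded with a sequence-classification head to score query/passage pairs. A minimal sketch (treating the output logit as a relevance score is an assumption here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'unicamp-dl/mMiniLM-L6-v2-mmarco-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
reranker = AutoModelForSequenceClassification.from_pretrained(model_name)

# Score one (query, passage) pair; rank passages by descending score.
inputs = tokenizer('qual é a capital do Brasil?',
                   'Brasília é a capital federal do Brasil.',
                   return_tensors='pt', truncation=True)
with torch.no_grad():
    print(reranker(**inputs).logits)
```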
# Citation
If you use mMiniLM-L6-v2-mmarco-v2, please cite:
```bibtex
@misc{bonifacio2021mmarco,
  title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
  author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
  year={2021},
  eprint={2108.13897},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
| 1,506 |
Kayvane/distilbert-complaints-product | [
"Bank account or service",
"Checking or savings account",
"Consumer Loan",
"Credit card",
"Credit card or prepaid card",
"Credit reporting",
"Credit reporting, credit repair services, or other personal consumer reports",
"Debt collection",
"Money transfer, virtual currency, or money service",
"Money transfers",
"Mortgage",
"Other financial service",
"Payday loan",
"Payday loan, title loan, or personal loan",
"Prepaid card",
"Student loan",
"Vehicle loan or lease",
"Virtual currency"
] | ---
tags:
- generated_from_trainer
datasets:
- consumer_complaints
model-index:
- name: distilbert-complaints-product
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-complaints-product
This model was trained on the [CFPB](https://www.consumerfinance.gov/data-research/consumer-complaints/) dataset, also made available in the HuggingFace Datasets library. It predicts the type of financial complaint based on the text provided.
## Model description
A DistilBert Text Classification Model, with 18 possible classes to determine the nature of a financial customer complaint.
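A minimal usage sketch (assumes the 18 complaint-product labels listed for this model are stored in its config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Kayvane/distilbert-complaints-product")
print(classifier("I was charged twice for my mortgage payment this month."))
# Illustrative output: [{'label': 'Mortgage', 'score': ...}]
```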
## Intended uses & limitations
This model is used as part of a demonstration of E2E machine learning projects focused on Contact Centre Automation:
- **Infrastructure:** Terraform
- **ML Ops:** HuggingFace (Datasets, Hub, Transformers)
- **ML Explainability:** SHAP
- **Cloud:** AWS
- Model Hosting: Lambda
- DB Backend: DynamoDB
- Orchestration: Step-Functions
- UI Hosting: EC2
- Routing: API Gateway
- **UI:** Budibase
## Training and evaluation data
consumer_complaints dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| 1,711 |
Luyu/bert-base-mdoc-bm25 | [
"LABEL_0"
] | ---
language:
- en
tags:
- text reranking
license: apache-2.0
datasets:
- MS MARCO document ranking
---
# BERT Reranker for MS-MARCO Document Ranking
## Model description
A text reranker trained on the MS MARCO document dataset for use with a BM25 retriever.
## Intended uses & limitations
It is possible to use this reranker with other retrievers, but it works best with the aligned BM25 retrieval described here.
We used the Anserini toolkit's BM25 implementation and indexed with tuned parameters (k1=3.8, b=0.87), following [this instruction](https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-doc.md).
#### How to use
See our [project repo page](https://github.com/luyug/Reranker).
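Outside the Reranker toolkit, a minimal scoring sketch (that the checkpoint loads as a standard sequence-classification head whose logit ranks documents by relevance is an assumption here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Luyu/bert-base-mdoc-bm25")
model = AutoModelForSequenceClassification.from_pretrained("Luyu/bert-base-mdoc-bm25")

# Encode a (query, document) pair and read off the relevance logit.
inputs = tokenizer("what is bm25?",
                   "BM25 is a ranking function used by search engines.",
                   return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    print(model(**inputs).logits)
```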
## Eval results
MRR @10: 0.423 on Dev.
### BibTeX entry and citation info
```bibtex
@inproceedings{gao2021lce,
title={Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline},
author={Luyu Gao and Zhuyun Dai and Jamie Callan},
year={2021},
booktitle={The 43rd European Conference On Information Retrieval (ECIR)},
}
``` | 1,064 |
mohsenfayyaz/toxicity-classifier | null | [BERT base model (uncased)](https://huggingface.co/bert-base-uncased) fine-tuned on [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) | 211 |
textattack/bert-base-uncased-STS-B | [
"LABEL_0"
] | Entry not found | 15 |
inovex/multi2convai-logistics-pl-bert | [
"details.address",
"tour.postcode.select",
"tour.finish",
"details.safeplace",
"details.preferedNeighbour",
"details.avoidNeighbour",
"tour.job.collected",
"no",
"yes",
"tour.start",
"tour.details",
"tour.job.signature",
"tour.job.delivered",
"select",
"tour.job.safePlace",
"safeplace",
"navigate",
"tour.job.carriedForward",
"tour.job.failed",
"help",
"navigate.back",
"undefined"
] | ---
tags:
- text-classification
widget:
- text: "gdzie mogę umieścić paczkę?"
license: mit
language: pl
---
# Multi2ConvAI-Logistics: finetuned Bert for Polish
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/de/blog/use-cases)))
- language: Polish (pl)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-pl-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-pl-bert")
````
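A minimal inference sketch on top of the snippet above (assumes the intent labels listed for this model are stored in `model.config.id2label`):
````python
import torch

inputs = tokenizer("gdzie mogę umieścić paczkę?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
````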
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai | 981 |
Cameron/BERT-mdgender-wizard | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Ivo/emscad-skill-extraction | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
aware-ai/roberta-large-squad-classification | null | ---
datasets:
- squad_v2
---
# Roberta-LARGE finetuned on SQuADv2
This is a roberta-large model fine-tuned on the SQuADv2 dataset to classify whether a question is answerable from its context.
## Model details
This model is simply a sequence classification model that takes two inputs (context and question) in a list.
The result is either [1] for answerable or [0] if it is not answerable.
It was trained for 4 epochs on the SQuADv2 dataset and can be used to filter out contexts that are not worth passing to the QA model, avoiding bad answers.
## Model training
This model was trained with following parameters using simpletransformers wrapper:
```
train_args = {
'learning_rate': 1e-5,
'max_seq_length': 512,
'overwrite_output_dir': True,
'reprocess_input_data': False,
'train_batch_size': 4,
'num_train_epochs': 4,
'gradient_accumulation_steps': 2,
'no_cache': True,
'use_cached_eval_features': False,
'save_model_every_epoch': False,
'output_dir': "bart-squadv2",
'eval_batch_size': 8,
'fp16_opt_level': 'O2',
}
```
## Results
```{"accuracy": 90.48%}```
## Model in Action 🚀
```python
from simpletransformers.classification import ClassificationModel
model = ClassificationModel('roberta', 'a-ware/roberta-large-squadv2', num_labels=2, args=train_args)
predictions, raw_outputs = model.predict([["my dog is an year old. he loves to go into the rain", "how old is my dog ?"]])
print(predictions)
==> [1]
```
> Created with ❤️ by A-ware UG [](https://github.com/aware-ai)
| 1,597 |
tals/albert-base-vitaminc-mnli | [
"NOT ENOUGH INFO",
"REFUTES",
"SUPPORTS"
] | ---
language: en
datasets:
- fever
- glue
- multi_nli
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
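A minimal usage sketch for claim verification (the (claim, evidence) input ordering below is an assumption; see the VitaminC repository for the exact setup):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tals/albert-base-vitaminc-mnli")
model = AutoModelForSequenceClassification.from_pretrained("tals/albert-base-vitaminc-mnli")

inputs = tokenizer("Fewer than 90,000 Wikipedia revisions were collected.",
                   "We collect over 100,000 Wikipedia revisions that modify an underlying fact.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # e.g. "REFUTES"
```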
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
| 2,369 |
PrimeQA/tydiqa-boolean-question-classifier | null | ---
license: apache-2.0
---
## Model description
A question type classification model based on multilingual BERT.
The question type classifier takes as input the question, and returns a label that distinguishes between boolean and short answer extractive questions.
The model was initialized with [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) and fine-tuned on the answerable subset of [TyDiQA](https://huggingface.co/datasets/tydiqa) train questions.
## Intended uses & limitations
You can use the raw model for question classification. Biases associated with the pre-existing language model, bert-base-multilingual-cased, may be present in our fine-tuned model, tydiqa-boolean-question-classifier.
## Usage
You can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework to support boolean questions in reading comprehension, as in this [example](https://github.com/primeqa/primeqa/tree/main/examples/boolqa).
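Outside PrimeQA, a minimal sketch with the `transformers` pipeline (the label names returned depend on this model's config and are not documented here):
```python
from transformers import pipeline

qtype = pipeline("text-classification", model="PrimeQA/tydiqa-boolean-question-classifier")
print(qtype("Is the sky blue?"))        # expect the boolean-question label
print(qtype("What color is the sky?"))  # expect the short-answer label
```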
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.08441,
author = {McCarley, Scott and
Bornea, Mihaela and
Rosenthal, Sara and
Ferritto, Anthony and
Sultan, Md Arafat and
Sil, Avirup and
Florian, Radu},
title = {GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions},
journal = {CoRR},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2206.08441},
}
``` | 2,206 |
Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09 | [
"chitchat_ask_bye",
"chitchat_ask_hi",
"chitchat_ask_hi_de",
"chitchat_ask_hi_en",
"chitchat_ask_hi_fr",
"chitchat_ask_hoe_gaat_het",
"chitchat_ask_name",
"chitchat_ask_thanks",
"faq_ask_aantal_gevaccineerd",
"faq_ask_aantal_gevaccineerd_wereldwijd",
"faq_ask_afspraak_afzeggen",
"faq_ask_afspraak_gemist",
"faq_ask_algemeen_info",
"faq_ask_allergisch_na_vaccinatie",
"faq_ask_alternatieve_medicatie",
"faq_ask_andere_vaccins",
"faq_ask_astrazeneca",
"faq_ask_astrazeneca_bij_ouderen",
"faq_ask_astrazeneca_bloedklonters",
"faq_ask_astrazeneca_prik_2",
"faq_ask_attest",
"faq_ask_autisme_na_vaccinatie",
"faq_ask_auto-immuun",
"faq_ask_begeleiding",
"faq_ask_beschermen",
"faq_ask_beschermingsduur",
"faq_ask_beschermingspercentage",
"faq_ask_besmetten_na_vaccin",
"faq_ask_betalen_voor_vaccin",
"faq_ask_betrouwbaar",
"faq_ask_betrouwbare_bronnen",
"faq_ask_bijsluiter",
"faq_ask_bijwerking_AZ",
"faq_ask_bijwerking_JJ",
"faq_ask_bijwerking_algemeen",
"faq_ask_bijwerking_lange_termijn",
"faq_ask_bijwerking_moderna",
"faq_ask_bijwerking_pfizer",
"faq_ask_bloed_geven",
"faq_ask_borstvoeding",
"faq_ask_buitenlander",
"faq_ask_chronisch_ziek",
"faq_ask_combi",
"faq_ask_complottheorie",
"faq_ask_complottheorie_5G",
"faq_ask_complottheorie_Bill_Gates",
"faq_ask_contra_ind",
"faq_ask_corona_is_griep",
"faq_ask_corona_vermijden",
"faq_ask_covid_door_vaccin",
"faq_ask_curevac",
"faq_ask_derde_prik",
"faq_ask_dna",
"faq_ask_duur_vaccinatie",
"faq_ask_eerst_weigeren",
"faq_ask_eerste_prik_buitenland",
"faq_ask_essentieel_beroep",
"faq_ask_experimenteel",
"faq_ask_foetus",
"faq_ask_geen_antwoord",
"faq_ask_geen_risicopatient",
"faq_ask_geen_uitnodiging",
"faq_ask_gestockeerd",
"faq_ask_gezondheidstoestand_gekend",
"faq_ask_gif_in_vaccin",
"faq_ask_goedkeuring",
"faq_ask_groepsimmuniteit",
"faq_ask_hartspierontsteking",
"faq_ask_hersenziekte",
"faq_ask_hoe_dodelijk",
"faq_ask_hoe_weet_overheid",
"faq_ask_hoeveel_dosissen",
"faq_ask_huisarts",
"faq_ask_huisdieren",
"faq_ask_iedereen",
"faq_ask_in_vaccin",
"faq_ask_info_vaccins",
"faq_ask_janssen",
"faq_ask_janssen_een_dosis",
"faq_ask_jong_en_gezond",
"faq_ask_keuze",
"faq_ask_keuze_vaccinatiecentrum",
"faq_ask_kinderen",
"faq_ask_kosjer_halal",
"faq_ask_leveringen",
"faq_ask_logistiek",
"faq_ask_logistiek_veilig",
"faq_ask_magnetisch",
"faq_ask_man_vrouw_verschillen",
"faq_ask_mantelzorger",
"faq_ask_maximaal_een_dosis",
"faq_ask_meer_bijwerkingen_tweede_dosis",
"faq_ask_minder_mobiel",
"faq_ask_moderna",
"faq_ask_mondmasker",
"faq_ask_motiveren",
"faq_ask_mrna_vs_andere_vaccins",
"faq_ask_naaldangst",
"faq_ask_nadelen",
"faq_ask_nuchter",
"faq_ask_ontwikkeling",
"faq_ask_onvruchtbaar",
"faq_ask_oplopen_vaccinatie",
"faq_ask_pfizer",
"faq_ask_phishing",
"faq_ask_pijnstiller",
"faq_ask_planning_eerstelijnszorg",
"faq_ask_planning_ouderen",
"faq_ask_positieve_test_na_vaccin",
"faq_ask_prioritaire_gropen",
"faq_ask_privacy",
"faq_ask_probleem_registratie",
"faq_ask_problemen_uitnodiging",
"faq_ask_quarantaine",
"faq_ask_qvax_probleem",
"faq_ask_reproductiegetal",
"faq_ask_risicopatient",
"faq_ask_risicopatient_diabetes",
"faq_ask_risicopatient_hartvaat",
"faq_ask_risicopatient_immuunziekte",
"faq_ask_risicopatient_kanker",
"faq_ask_risicopatient_luchtwegaandoening",
"faq_ask_smaakverlies",
"faq_ask_snel_ontwikkeld",
"faq_ask_sneller_aan_de_beurt",
"faq_ask_taxi",
"faq_ask_test_voor_vaccin",
"faq_ask_testen",
"faq_ask_tijd_tot_tweede_dosis",
"faq_ask_timing_andere_vaccins",
"faq_ask_trage_start",
"faq_ask_tweede_dosis_afspraak",
"faq_ask_tweede_dosis_vervroegen",
"faq_ask_twijfel_bijwerkingen",
"faq_ask_twijfel_effectiviteit",
"faq_ask_twijfel_inhoud",
"faq_ask_twijfel_ivm_vaccinatie",
"faq_ask_twijfel_noodzaak",
"faq_ask_twijfel_ontwikkeling",
"faq_ask_twijfel_praktisch",
"faq_ask_twijfel_vaccins_zelf",
"faq_ask_twijfel_vrijheid",
"faq_ask_uit_flacon",
"faq_ask_uitnodiging_afspraak_kwijt",
"faq_ask_uitnodiging_na_vaccinatie",
"faq_ask_vaccin_doorgeven",
"faq_ask_vaccin_immuunsysteem",
"faq_ask_vaccin_variant",
"faq_ask_vaccinatiecentrum",
"faq_ask_vaccine_covid_gehad",
"faq_ask_vaccine_covid_gehad_effect",
"faq_ask_vakantie",
"faq_ask_veelgestelde_vragen",
"faq_ask_vegan",
"faq_ask_verplicht",
"faq_ask_verschillen",
"faq_ask_vrijwillig_Janssen",
"faq_ask_vrijwilliger",
"faq_ask_waar_en_wanneer",
"faq_ask_waarom",
"faq_ask_waarom_niet_verplicht",
"faq_ask_waarom_ouderen_eerst",
"faq_ask_waarom_twee_prikken",
"faq_ask_waarom_twijfel",
"faq_ask_wanneer_algemene_bevolking",
"faq_ask_wanneer_iedereen_gevaccineerd",
"faq_ask_wat_is_corona",
"faq_ask_wat_is_rna",
"faq_ask_wat_is_vaccin",
"faq_ask_wat_na_vaccinatie",
"faq_ask_welk_vaccin_krijg_ik",
"faq_ask_welke_vaccin",
"faq_ask_wie_ben_ik",
"faq_ask_wie_doet_inenting",
"faq_ask_wie_is_risicopatient",
"faq_ask_wie_nu",
"faq_ask_wilsonbekwaam",
"faq_ask_zwanger",
"get_started",
"nlu_fallback",
"test"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09
This model is a fine-tuned version of [outputDAQonly09/](https://huggingface.co/outputDAQonly09/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4978
- Accuracy: 0.9031
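A minimal usage sketch for intent prediction (assumes the intent labels listed for this model are stored in its config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09")
print(classifier("Welke bijwerkingen heeft het vaccin van Pfizer?"))
# Illustrative output: [{'label': 'faq_ask_bijwerking_pfizer', 'score': ...}]
```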
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 330 | 3.9692 | 0.2249 |
| 4.3672 | 2.0 | 660 | 3.1312 | 0.4031 |
| 4.3672 | 3.0 | 990 | 2.5068 | 0.5658 |
| 3.1495 | 4.0 | 1320 | 2.0300 | 0.6600 |
| 2.2491 | 5.0 | 1650 | 1.6517 | 0.7450 |
| 2.2491 | 6.0 | 1980 | 1.3604 | 0.7943 |
| 1.622 | 7.0 | 2310 | 1.1328 | 0.8327 |
| 1.1252 | 8.0 | 2640 | 0.9484 | 0.8611 |
| 1.1252 | 9.0 | 2970 | 0.8212 | 0.8757 |
| 0.7969 | 10.0 | 3300 | 0.7243 | 0.8830 |
| 0.5348 | 11.0 | 3630 | 0.6597 | 0.8867 |
| 0.5348 | 12.0 | 3960 | 0.5983 | 0.8857 |
| 0.3744 | 13.0 | 4290 | 0.5635 | 0.8976 |
| 0.2564 | 14.0 | 4620 | 0.5437 | 0.8985 |
| 0.2564 | 15.0 | 4950 | 0.5124 | 0.9013 |
| 0.1862 | 16.0 | 5280 | 0.5074 | 0.9022 |
| 0.1349 | 17.0 | 5610 | 0.5028 | 0.9049 |
| 0.1349 | 18.0 | 5940 | 0.4876 | 0.9077 |
| 0.0979 | 19.0 | 6270 | 0.4971 | 0.9049 |
| 0.0763 | 20.0 | 6600 | 0.4941 | 0.9022 |
| 0.0763 | 21.0 | 6930 | 0.4957 | 0.9049 |
| 0.0602 | 22.0 | 7260 | 0.4989 | 0.9049 |
| 0.0504 | 23.0 | 7590 | 0.4959 | 0.9040 |
| 0.0504 | 24.0 | 7920 | 0.4944 | 0.9031 |
| 0.0422 | 25.0 | 8250 | 0.4985 | 0.9040 |
| 0.0379 | 26.0 | 8580 | 0.4970 | 0.9049 |
| 0.0379 | 27.0 | 8910 | 0.4949 | 0.9040 |
| 0.0351 | 28.0 | 9240 | 0.4971 | 0.9040 |
| 0.0321 | 29.0 | 9570 | 0.4967 | 0.9031 |
| 0.0321 | 30.0 | 9900 | 0.4978 | 0.9031 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 3,194 |
sismetanin/rubert-ru-sentiment-rusentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## RuBERT-Base-ru-sentiment-RuSentiment
RuBERT-ru-sentiment-RuSentiment is a [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned on [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
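A minimal usage sketch (how LABEL_0 through LABEL_4 map onto RuSentiment's five classes is not documented here, so inspect `model.config.id2label` before relying on it):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sismetanin/rubert-ru-sentiment-rusentiment")
print(classifier("Очень хороший день!"))  # "A very good day!"
```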
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models' results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` | 6,333 |
deepset/gbert-large-sts | [
"LABEL_0"
] | ---
language: de
license: mit
tags:
- exbert
---
## Overview
**Language model:** gbert-large-sts
**Language:** German
**Training data:** German STS benchmark train and dev set
**Eval data:** German STS benchmark test set
**Infrastructure**: 1x V100 GPU
**Published**: August 12th, 2021
## Details
- We trained a gbert-large model on the task of estimating semantic similarity of German-language text pairs. The dataset is a machine-translated version of the [STS benchmark](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark), which is available [here](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark).
## Hyperparameters
```
batch_size = 16
n_epochs = 4
warmup_ratio = 0.1
learning_rate = 2e-5
lr_schedule = LinearWarmup
```
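## Usage
A minimal scoring sketch (that the checkpoint exposes a single regression-style logit for the similarity of a sentence pair is an assumption here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large-sts")
model = AutoModelForSequenceClassification.from_pretrained("deepset/gbert-large-sts")

inputs = tokenizer("Der Hund spielt im Garten.",
                   "Ein Hund tollt draußen im Garten herum.",
                   return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits)  # higher = more similar
```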
## Performance
Stay tuned... and watch out for new papers on arxiv.org ;)
## Authors
- Julian Risch: `julian.risch [at] deepset.ai`
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Gutsch: `julian.gutsch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
| 1,807 |
edumunozsala/RuPERTa_base_sentiment_analysis_es | [
"Negativo",
"Positivo"
] | ---
language: es
tags:
- sagemaker
- ruperta
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
model-index:
- name: RuPERTa_base_sentiment_analysis_es
  results:
  - task:
      name: Sentiment Analysis
      type: sentiment-analysis
    dataset:
      name: IMDb Reviews in Spanish
      type: IMDbreviews_es
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.881866
    - name: F1 Score
      type: f1
      value: 0.008272
    - name: Precision
      type: precision
      value: 0.858605
    - name: Recall
      type: recall
      value: 0.920062
widget:
- text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
---
## Model `RuPERTa_base_sentiment_analysis_es`
### **A finetuned model for Sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
The base model is **RuPERTa-base (uncased)**, a RoBERTa model trained on an uncased version of a large Spanish corpus.
It was trained by Manuel Romero (mrm8488). [Link to base model](https://huggingface.co/mrm8488/RuPERTa-base)
## Dataset
The dataset is a collection of movie reviews in Spanish, about 50,000 reviews. The dataset is balanced and provides every review in english, in spanish and the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Hyperparameters
```json
{
  "epochs": "4",
  "train_batch_size": "32",
  "eval_batch_size": "8",
  "fp16": "true",
  "learning_rate": "3e-05",
  "model_name": "\"mrm8488/RuPERTa-base\"",
  "sagemaker_container_log_level": "20",
  "sagemaker_program": "\"train.py\""
}
```
## Evaluation results
Accuracy = 0.8629333333333333
F1 Score = 0.8648790746582545
Precision = 0.8479381443298969
Recall = 0.8825107296137339
## Test results
Accuracy = 0.8066666666666666
F1 Score = 0.8057862309134743
Precision = 0.7928307854507116
Recall = 0.8191721132897604
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es")
text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
| 2,864 |
amandakonet/climatebert-fact-checking | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
language:
- en
datasets: climate_fever
tags:
- fact-checking
- climate
- text entailment
---
This model is [ClimateBert](https://huggingface.co/climatebert/distilroberta-base-climate-f) fine-tuned on the textual entailment task using Climate FEVER data. Given (claim, evidence) pairs, the model predicts support (entailment), refute (contradict), or not enough info (neutral). The model has 67% validation accuracy.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained("amandakonet/climatebert-fact-checking")
tokenizer = AutoTokenizer.from_pretrained("amandakonet/climatebert-fact-checking")
features = tokenizer(['Beginning in 2005, however, polar ice modestly receded for several years'],
['Polar Discovery "Continued Sea Ice Decline in 2005'],
padding='max_length', truncation=True, return_tensors="pt", max_length=512)
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
``` | 1,233 |
anshr/distilgpt2_reward_model_02 | null | Entry not found | 15 |
Team-PIXEL/pixel-base-finetuned-xnli-translate-train-all | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
- ar
- bg
- de
- el
- fr
- hi
- ru
- es
- sw
- th
- tr
- ur
- vi
- zh
tags:
- generated_from_trainer
datasets:
- xnli
metrics:
- accuracy
model-index:
- name: pixel-base-finetuned-xnli-translate-train-all
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: XNLI
type: xnli
args: xnli
metrics:
- name: Joint validation accuracy
type: accuracy
value: 0.6254886211512718
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-xnli-translate-train-all
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the XNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 8
- seed: 555
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 50000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
| 1,494 |
gilf/english-yelp-sentiment | [
"1 star",
"2 stars",
"3 stars",
"4 stars",
"5 stars"
] | Entry not found | 15 |
textattack/distilbert-base-cased-SST-2 | null | Entry not found | 15 |
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment | [
"negative",
"neutral",
"positive"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT MSA SA Model
## Model description
**CAMeLBERT MSA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT MSA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 3,373 |
lhoestq/distilbert-base-uncased-finetuned-absa-as | [
"NEGATIVE",
"POSITIVE"
] | DistilBERT fine-tuned for Aspect-Based Sentiment Analysis (ABSA) with an auxiliary sentence.
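A minimal usage sketch of the auxiliary-sentence setup (the exact auxiliary-sentence template used during fine-tuning is an assumption here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lhoestq/distilbert-base-uncased-finetuned-absa-as")
# Pair the review with an auxiliary sentence that names the aspect.
print(classifier({"text": "The food was great but the service was slow.",
                  "text_pair": "What do you think of the service?"}))
```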
```bibtex
@inproceedings{sun-etal-2019-utilizing,
title = "Utilizing {BERT} for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence",
author = "Sun, Chi and
Huang, Luyao and
Qiu, Xipeng",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N19-1035",
doi = "10.18653/v1/N19-1035",
pages = "380--385",
abstract = "Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA). In this paper, we construct an auxiliary sentence from the aspect and convert ABSA to a sentence-pair classification task, such as question answering (QA) and natural language inference (NLI). We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets. The source codes are available at https://github.com/HSLCY/ABSA-BERT-pair.",
}
``` | 1,361 |
sgunderscore/hatescore-korean-hate-speech | [
"None",
"기타 혐오",
"남성",
"단순 악플",
"성소수자",
"여성/가족",
"연령",
"인종/국적",
"종교",
"지역"
] | Entry not found | 15 |
HannahRoseKirk/Hatemoji | null | ---
license: cc-by-4.0
language:
- en
tags:
- text-classification
- pytorch
- hate-speech-detection
datasets:
- HatemojiBuild
- HatemojiCheck
metrics:
- Accuracy, F1 Score
---
# Hatemoji Model
## Model description
This model is a fine-tuned version of the [DeBERTa base model](https://huggingface.co/microsoft/deberta-base). This model is cased. The model was trained on iterative rounds of adversarial data generation with human-and-model-in-the-loop. In each round, annotators are tasked with tricking the model-in-the-loop with emoji-containing statements that it will misclassify. Between each round, the model is retrained. This is the final model from the iterative process, referred to as R8-T in our paper. The intended task is to classify an emoji-containing statement as either non-hateful (LABEL 0.0) or hateful (LABEL 1.0).
- **Github Repository:** https://github.com/HannahKirk/Hatemoji
- **HuggingFace Datasets:** [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild) & [HatemojiCheck](https://huggingface.co/datasets/HannahRoseKirk/HatemojiCheck)
- **Paper:** https://arxiv.org/abs/2108.05921
- **Point of Contact:** hannah.kirk@oii.ox.ac.uk
## Intended uses & limitations
The intended use of the model is to classify English-language, emoji-containing, short-form text documents as a binary task: non-hateful vs hateful. The model has demonstrated strengths compared to commercial and academic models on classifying emoji-based hate, but is also a strong classifier of text-only hate. Because the model was trained on synthetic, adversarially-generated data, it may have some weaknesses when it comes to empirical emoji-based hate 'in-the-wild'.
You can interact with this model on [Dynabench](https://dynabench.org/tasks/hs), and find its limitations. We hope to continue improving the model on new adversarial data to better iron out its remaining weaknesses!
## How to use
The model can be used with pipeline:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='HannahRoseKirk/Hatemoji', return_all_scores=True)
prediction = classifier("I 💜💙💚 emoji 😍", )
print(prediction)
"""
Output
[[{'label': 'LABEL_0', 'score': 0.9999157190322876}, {'label': 'LABEL_1', 'score': 8.425049600191414e-05}]]
"""
```
### Training data
The model was trained on:
* The three rounds of emoji-containing, adversarially-generated texts from [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild)
* The four rounds of text-only, adversarially-generated texts from Vidgen et al., (2021). _Learning from the worst: Dynamically generated datasets to improve online hate detection_. Available on [Github](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset) and explained in their [paper](https://arxiv.org/abs/2012.15761).
* A collection of widely available and publicly accessible datasets from [hatespeechdata.com](https://hatespeechdata.com/)
## Train procedure
The model was trained using HuggingFace's [run glue script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py), using the following parameters:
```
python3 transformers/examples/pytorch/text-classification/run_glue.py \
--model_name_or_path microsoft/deberta-base \
--validation_file path_to_data/dev.csv \
--train_file path_to_data/train.csv \
--do_train --do_eval --max_seq_length 512 --learning_rate 2e-5 \
--num_train_epochs 3 --evaluation_strategy epoch \
--load_best_model_at_end --output_dir path_to_outdir/deberta123/ \
--seed 123 \
--cache_dir /.cache/huggingface/transformers/ \
--overwrite_output_dir > ./log_deb 2> ./err_deb
```
We experimented with upsampling the train split of each round to improve performance with increments of [1, 5, 10, 100], with the optimum upsampling taken
forward to all subsequent rounds. The optimal upsampling ratios for R1-R4 (text rounds from Vidgen et al.,) are carried forward. This model is trained on upsampling ratios of `{'R0':1, 'R1':5, 'R2':100, 'R3':1, 'R4':1 , 'R5':100, 'R6':1, 'R7':5}`.
## Variable and metrics
We wished to train a model which could effectively encode information about emoji-based hate, without worsening performance on text-only hate. Thus, we evaluate the model on:
* [HatemojiCheck](https://huggingface.co/datasets/HannahRoseKirk/HatemojiCheck), an evaluation checklist with 7 functionalities of emoji-based hate and contrast sets
* [HateCheck](https://huggingface.co/datasets/Paul/hatecheck), an evaluation checklist contains 29 functional tests for hate speech and contrast sets.
* The held-out test sets from [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild), the three rounds of adversarially-generated data collection with emoji-containing examples (R5-R7), available on Hugging Face
* The held-out test sets from the four rounds of adversarially-generated data collection with text-only examples (R1-4, from [Vidgen et al.](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset))
For the round-specific test sets, we used a weighted F1-score across them to choose the final model in each round. For more details, see our [paper](https://arxiv.org/abs/2108.05921)
## Evaluation results
We compare our model to:
* **P-IA**: the identity attack attribute from Perspective API
* **P-TX**: the toxicity attribute from Perspective API
* **B-D**: A BERT model trained on the [Davidson et al. (2017)](https://github.com/t-davidson/hate-speech-and-offensive-language) dataset
* **B-F**: A BERT model trained on the [Founta et al. (2018)](https://github.com/ENCASEH2020/hatespeech-twitter) dataset
| | **Emoji Test Sets** | | | | **Text Test Sets** | | | | **All Rounds** | |
| :------- | :-----------------: | :--------: | :------------: | :--------: | :----------------: | :--------: | :-----------: | :--------: | :------------: | :--------: |
| | **R5-R7** | | **HmojiCheck** | | **R1-R4** | | **HateCheck** | | **R1-R7** | |
| | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** |
| **P-IA** | 0.508 | 0.394 | 0.689 | 0.754 | 0.679 | 0.720 | 0.765 | 0.839 | 0.658 | 0.689 |
| **P-TX** | 0.523 | 0.448 | 0.650 | 0.711 | 0.602 | 0.659 | 0.720 | 0.813 | 0.592 | 0.639 |
| **B-D** | 0.489 | 0.270 | 0.578 | 0.636 | 0.589 | 0.607 | 0.632 | 0.738 | 0.591 | 0.586 |
| **B-F** | 0.496 | 0.322 | 0.552 | 0.605 | 0.562 | 0.562 | 0.602 | 0.694 | 0.557 | 0.532 |
| **Hatemoji** | **0.744** | **0.755** | **0.871** | **0.904** | **0.827** | **0.844** | **0.966** | **0.975** | **0.814** | **0.829** |
For full discussion of the model results, see our [paper](https://arxiv.org/abs/2108.05921).
A recent [paper](https://arxiv.org/pdf/2202.11176.pdf) by Lees et al. (2022), _A New Generation of Perspective API: Efficient Multilingual Character-level Transformers_, beats this model on the HatemojiCheck benchmark. | 7,502 |
danielhou13/longformer-finetuned-news-cogs402 | null | Entry not found | 15 |
M47Labs/spanish_news_classification_headlines | [
"ciencia_tecnologia",
"clickbait",
"cultura",
"deportes",
"economia",
"educacion",
"medio_ambiente",
"opinion",
"politica",
"sociedad"
] | ---
widget:
- text: "El dólar se dispara tras la reunión de la Fed"
---
# Spanish News Classification Headlines
SNCH: this model was developed by [M47Labs](https://www.m47labs.com/es/) for text classification. The base model used was [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased), fine-tuned on a 1,000-example dataset.
## Dataset Sample
Dataset size : 1000
Columns: idTask, task content 1, idTag, tag.
|idTask|task content 1|idTag|tag|
|------|------|------|------|
|3637d9ac-119c-4a8f-899c-339cf5b42ae0|Alcalá de Guadaíra celebra la IV Semana de la Diversidad Sexual con acciones de sensibilización|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|d56bab52-0029-45dd-ad90-5c17d4ed4c88|El Archipiélago Chinijo Graciplus se impone en el Trofeo Centro Comercial Rubicón|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes|
|dec70bc5-4932-4fa2-aeac-31a52377be02|Un total de 39 personas padecen ELA actualmente en la provincia|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|fb396ba9-fbf1-4495-84d9-5314eb731405|Eurocopa 2021 : Italia vence a Gales y pasa a octavos con su candidatura reforzada|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes|
|bc5a36ca-4e0a-422e-9167-766b41008c01|Resolución de 10 de junio de 2021, del Ayuntamiento de Tarazona de La Mancha (Albacete), referente a la convocatoria para proveer una plaza.|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|a87f8703-ce34-47a5-9c1b-e992c7fe60f6|El primer ministro sueco pierde una moción de censura|209ae89e-55b4-41fd-aac0-5400feab479e|politica|
|d80bdaad-0ad5-43a0-850e-c473fd612526|El dólar se dispara tras la reunión de la Fed|11925830-148e-4890-a2bc-da9dc059dc17|economia|
## Labels:
* ciencia_tecnologia
* clickbait
* cultura
* deportes
* economia
* educacion
* medio_ambiente
* opinion
* politica
* sociedad
## Example of Use
### Pipeline
```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification,TextClassificationPipeline
review_text = 'los vehiculos que esten esperando pasajaeros deberan estar apagados para reducir emisiones'
path = "M47Labs/spanish_news_classification_headlines"
tokenizer = AutoTokenizer.from_pretrained(path)
model = BertForSequenceClassification.from_pretrained(path)
nlp = TextClassificationPipeline(task = "text-classification",
model = model,
tokenizer = tokenizer)
print(nlp(review_text))
```
```[{'label': 'medio_ambiente', 'score': 0.5648820996284485}]```
### Pytorch
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = 'M47Labs/spanish_news_classification_headlines'
MAX_LEN = 32
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
texto = "las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno"
encoded_review = tokenizer.encode_plus(
texto,
max_length=MAX_LEN,
add_special_tokens=True,
#return_token_type_ids=False,
padding='max_length',
return_attention_mask=True,
return_tensors='pt',
)
input_ids = encoded_review['input_ids']
attention_mask = encoded_review['attention_mask']
output = model(input_ids, attention_mask)
_, prediction = torch.max(output['logits'], dim=1)
print(f'Review text: {texto}')
print(f'Sentiment : {model.config.id2label[prediction.detach().cpu().numpy()[0]]}')
```
```Review text: las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno```
```Sentiment : medio_ambiente```
A more in-depth example of how to use the model can be found in this Colab notebook: https://colab.research.google.com/drive/1XsKea6oMyEckye2FePW_XN7Rf8v41Cw_?usp=sharing
## Finetune Hyperparameters
* MAX_LEN = 32
* TRAIN_BATCH_SIZE = 8
* VALID_BATCH_SIZE = 4
* EPOCHS = 5
* LEARNING_RATE = 1e-05
## Train Results
|n_example|epoch|loss|acc|
|------|------|------|------|
|100|0|2.286327266693115|12.5|
|100|1|2.018876111507416|40.0|
|100|2|1.8016730904579163|43.75|
|100|3|1.6121837735176086|46.25|
|100|4|1.41565443277359|68.75|
|n_example|epoch|loss|acc|
|------|------|------|------|
|500|0|2.0770938420295715|24.5|
|500|1|1.6953029704093934|50.25|
|500|2|1.258900796175003|64.25|
|500|3|0.8342628020048142|78.25|
|500|4|0.5135736921429634|90.25|
|n_example|epoch|loss|acc|
|------|------|------|------|
|1000|0|1.916002897115854|36.1997226074896|
|1000|1|1.2941598492664295|62.2746185852982|
|1000|2|0.8201534710415117|76.97642163661581|
|1000|3|0.524806430051615|86.9625520110957|
|1000|4|0.30662027455784463|92.64909847434119|
## Validation Results
|n_examples|100|
|------|------|
|Accuracy Score|0.35|
|Precision (Macro)|0.35|
|Recall (Macro)|0.16|
|n_examples|500|
|------|------|
|Accuracy Score|0.62|
|Precision (Macro)|0.60|
|Recall (Macro)|0.47|
|n_examples|1000|
|------|------|
|Accuracy Score|0.68|
|Precision(Macro)|0.68|
|Recall (Macro)|0.64|

| 5,194 |
anirudh21/albert-base-v2-finetuned-qnli | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-base-v2-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9112209408749771
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-qnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3194
- Accuracy: 0.9112
## Model description
More information needed
## Intended uses & limitations
More information needed
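Pending the full card, a minimal inference sketch (QNLI pairs a question with a candidate answer sentence; label names come from the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "anirudh21/albert-base-v2-finetuned-qnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "Where is the Eiffel Tower located?",
    "The Eiffel Tower is in Paris, France.",
    return_tensors="pt",
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```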
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3116 | 1.0 | 6547 | 0.2818 | 0.8849 |
| 0.2467 | 2.0 | 13094 | 0.2532 | 0.9001 |
| 0.1858 | 3.0 | 19641 | 0.3194 | 0.9112 |
| 0.1449 | 4.0 | 26188 | 0.4338 | 0.9103 |
| 0.0584 | 5.0 | 32735 | 0.5752 | 0.9052 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,839 |
Intel/xlnet-base-cased-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: xlnet-base-cased-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.8896672504378283
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-mrpc
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7156
- Accuracy: 0.8456
- F1: 0.8897
- Combined Score: 0.8676
## Model description
More information needed
## Intended uses & limitations
More information needed
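Pending the full card, a minimal paraphrase-detection sketch (the example pair is illustrative; label names come from the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "Intel/xlnet-base-cased-mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "The company said revenue rose 5% in the quarter.",
    "Quarterly revenue increased by five percent, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # "equivalent" or "not_equivalent"
```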
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
| 1,509 |
allenai/multicite-multilabel-scibert | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
language: en
tags:
- scibert
license: mit
---
# MultiCite: Multi-label Citation Intent Classification with SciBERT (NAACL 2022)
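A minimal multi-label inference sketch (sigmoid decoding with a 0.5 threshold is an assumption; the checkpoint exposes generic LABEL_0-LABEL_6 names):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "allenai/multicite-multilabel-scibert"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("We follow the training setup of (Devlin et al., 2019).", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # multi-label: one sigmoid per intent
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```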
This model has been trained on the data available here: https://github.com/allenai/multicite | 227 |
ajrae/bert-base-uncased-finetuned-mrpc | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8578431372549019
- name: F1
type: f1
value: 0.9003436426116839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4520
- Accuracy: 0.8578
- F1: 0.9003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.4169 | 0.8039 | 0.8639 |
| No log | 2.0 | 460 | 0.4299 | 0.8137 | 0.875 |
| 0.4242 | 3.0 | 690 | 0.4520 | 0.8578 | 0.9003 |
| 0.4242 | 4.0 | 920 | 0.6323 | 0.8431 | 0.8926 |
| 0.1103 | 5.0 | 1150 | 0.6163 | 0.8578 | 0.8997 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,987 |
federicopascual/finetuned-sentiment-analysis-model | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- precision
- recall
model-index:
- name: finetuned-sentiment-analysis-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.909
- name: Precision
type: precision
value: 0.8899803536345776
- name: Recall
type: recall
value: 0.9282786885245902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-sentiment-analysis-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2868
- Accuracy: 0.909
- Precision: 0.8900
- Recall: 0.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
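Pending the full card, a minimal usage sketch (the example review is illustrative; labels come from the model config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="federicopascual/finetuned-sentiment-analysis-model",
)
print(classifier("This movie was a complete waste of time."))
```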
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,622 |
gchhablani/bert-base-cased-finetuned-cola | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-cased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5956649094312695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6747
- Matthews Correlation: 0.5957
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name cola \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-cola \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4921 | 1.0 | 535 | 0.5283 | 0.5068 |
| 0.2837 | 2.0 | 1070 | 0.5133 | 0.5521 |
| 0.1775 | 3.0 | 1605 | 0.6747 | 0.5957 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| 2,750 |
hsaglamlar/autotrain-stress-1106740293 | [
"0",
"1"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- hsaglamlar/autotrain-data-stress
co2_eq_emissions: 0.009057639447268492
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1106740293
- CO2 Emissions (in grams): 0.009057639447268492
## Validation Metrics
- Loss: 0.40180888772010803
- Accuracy: 0.8261904761904761
- Precision: 0.7195767195767195
- Recall: 0.8717948717948718
- AUC: 0.9021100427350428
- F1: 0.7884057971014493
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/hsaglamlar/autotrain-stress-1106740293
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hsaglamlar/autotrain-stress-1106740293", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hsaglamlar/autotrain-stress-1106740293", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,182 |
JuliusAlphonso/distilbert-plutchik | [
"anger",
"anticipation",
"disgust",
"fear",
"joy",
"neutral",
"sadness",
"surprise",
"trust"
Labels are based on Plutchik's model of emotions and may be combined (see the diagram below).
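Because the labels may be combined, a per-label sigmoid is the natural decoding; a minimal sketch (the 0.5 threshold is an assumption):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "JuliusAlphonso/distilbert-plutchik"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I can't believe we won -- what a day!", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # labels may co-occur
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```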
 | 181 |
akhooli/xlm-r-large-arabic-toxic | [
"LABEL_0_negative",
"LABEL_1_positive"
] | ---
language:
- ar
- en
license: mit
---
### xlm-r-large-arabic-toxic (toxic/hate speech classifier)
Toxic (hate speech) classification (Label_0: non-toxic, Label_1: toxic) of Arabic comments by fine-tuning XLM-Roberta-Large.
Zero shot classification of other languages (also works in mixed languages - ex. Arabic & English).
Usage and further info: see last section in this [Colab notebook](https://lnkd.in/d3bCFyZ)
| 423 |
Anudev08/model_3 | null | Entry not found | 15 |
LilaBoualili/bert-vanilla | null | At its core it uses a BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking. | 312 |
soleimanian/financial-roberta-large-sentiment | [
"negative",
"neutral",
"positive"
] | ---
license: apache-2.0
language:
- en
tags:
- text-classification
- Sentiment
- RoBERTa
- Financial Statements
- Accounting
- Finance
- Business
- ESG
- CSR Reports
- Financial News
- Earnings Call Transcripts
- Sustainability
- Corporate governance
---
<!DOCTYPE html>
<html>
<body>
<h1><b>Financial-RoBERTa</b></h1>
<p><b>Financial-RoBERTa</b> is a pre-trained NLP model to analyze sentiment of financial text including:</p>
<ul style="PADDING-LEFT: 40px">
<li>Financial Statements,</li>
<li>Earnings Announcements,</li>
<li>Earnings Call Transcripts,</li>
<li>Corporate Social Responsibility (CSR) Reports,</li>
<li>Environmental, Social, and Governance (ESG) News,</li>
<li>Financial News,</li>
<li>Etc.</li>
</ul>
<p>Financial-RoBERTa is built by further training and fine-tuning the RoBERTa Large language model using a large corpus created from 10k, 10Q, 8K, Earnings Call Transcripts, CSR Reports, ESG News, and Financial News text.</p>
<p>The model will give softmax outputs for three labels: <b>Positive</b>, <b>Negative</b> or <b>Neutral</b>.</p>
<p><b>How to perform sentiment analysis:</b></p>
<p>The easiest way to use the model for single predictions is Hugging Face's sentiment analysis pipeline, which only needs a couple lines of code as shown in the following example:</p>
<pre>
<code>
from transformers import pipeline
sentiment_analysis = pipeline("sentiment-analysis",model="soleimanian/financial-roberta-large-sentiment")
print(sentiment_analysis("In fiscal 2021, we generated a net yield of approximately 4.19% on our investments, compared to approximately 5.10% in fiscal 2020."))
</code>
</pre>
<p>I provide an example script via <a href="https://colab.research.google.com/drive/11RGWU3UDtxnjan8Ug6dyX82m9fBV6CGo?usp=sharing" target="_blank">Google Colab</a>. You can load your data to a Google Drive and run the script for free on a Colab.</p>
<p><b>Citation and contact:</b></p>
<p>Please cite <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4115943" target="_blank">this paper</a> when you use the model. Feel free to reach out to mohammad.soleimanian@concordia.ca with any questions or feedback you may have.</p>
</body>
</html>
| 2,197 |
tornqvistmax/7cats_finetuned | null | Entry not found | 15 |
IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_100",
"LABEL_101",
"LABEL_102",
"LABEL_103",
"LABEL_104",
"LABEL_105",
"LABEL_106",
"LABEL_107",
"LABEL_108",
"LABEL_109",
"LABEL_11",
"LABEL_110",
"LABEL_111",
"LABEL_112",
"LABEL_113",
"LABEL_114",
"LABEL_115",
"LABEL_116",
"LABEL_117",
"LABEL_118",
"LABEL_119",
"LABEL_12",
"LABEL_120",
"LABEL_121",
"LABEL_122",
"LABEL_123",
"LABEL_124",
"LABEL_125",
"LABEL_126",
"LABEL_127",
"LABEL_128",
"LABEL_129",
"LABEL_13",
"LABEL_130",
"LABEL_131",
"LABEL_132",
"LABEL_133",
"LABEL_134",
"LABEL_135",
"LABEL_136",
"LABEL_137",
"LABEL_138",
"LABEL_139",
"LABEL_14",
"LABEL_140",
"LABEL_141",
"LABEL_142",
"LABEL_143",
"LABEL_144",
"LABEL_145",
"LABEL_146",
"LABEL_147",
"LABEL_148",
"LABEL_149",
"LABEL_15",
"LABEL_150",
"LABEL_151",
"LABEL_152",
"LABEL_153",
"LABEL_154",
"LABEL_155",
"LABEL_156",
"LABEL_157",
"LABEL_158",
"LABEL_159",
"LABEL_16",
"LABEL_160",
"LABEL_161",
"LABEL_162",
"LABEL_163",
"LABEL_164",
"LABEL_165",
"LABEL_166",
"LABEL_167",
"LABEL_168",
"LABEL_169",
"LABEL_17",
"LABEL_170",
"LABEL_171",
"LABEL_172",
"LABEL_173",
"LABEL_174",
"LABEL_175",
"LABEL_176",
"LABEL_177",
"LABEL_178",
"LABEL_179",
"LABEL_18",
"LABEL_180",
"LABEL_181",
"LABEL_182",
"LABEL_183",
"LABEL_184",
"LABEL_185",
"LABEL_186",
"LABEL_187",
"LABEL_188",
"LABEL_189",
"LABEL_19",
"LABEL_190",
"LABEL_191",
"LABEL_192",
"LABEL_193",
"LABEL_194",
"LABEL_195",
"LABEL_196",
"LABEL_197",
"LABEL_198",
"LABEL_199",
"LABEL_2",
"LABEL_20",
"LABEL_200",
"LABEL_201",
"LABEL_202",
"LABEL_203",
"LABEL_204",
"LABEL_205",
"LABEL_206",
"LABEL_207",
"LABEL_208",
"LABEL_209",
"LABEL_21",
"LABEL_210",
"LABEL_211",
"LABEL_212",
"LABEL_213",
"LABEL_214",
"LABEL_215",
"LABEL_216",
"LABEL_217",
"LABEL_218",
"LABEL_219",
"LABEL_22",
"LABEL_220",
"LABEL_221",
"LABEL_222",
"LABEL_223",
"LABEL_224",
"LABEL_225",
"LABEL_226",
"LABEL_227",
"LABEL_228",
"LABEL_229",
"LABEL_23",
"LABEL_230",
"LABEL_231",
"LABEL_232",
"LABEL_233",
"LABEL_234",
"LABEL_235",
"LABEL_236",
"LABEL_237",
"LABEL_238",
"LABEL_239",
"LABEL_24",
"LABEL_240",
"LABEL_241",
"LABEL_242",
"LABEL_243",
"LABEL_244",
"LABEL_245",
"LABEL_246",
"LABEL_247",
"LABEL_248",
"LABEL_249",
"LABEL_25",
"LABEL_250",
"LABEL_251",
"LABEL_252",
"LABEL_253",
"LABEL_254",
"LABEL_255",
"LABEL_256",
"LABEL_257",
"LABEL_258",
"LABEL_259",
"LABEL_26",
"LABEL_260",
"LABEL_261",
"LABEL_262",
"LABEL_263",
"LABEL_264",
"LABEL_265",
"LABEL_266",
"LABEL_267",
"LABEL_268",
"LABEL_269",
"LABEL_27",
"LABEL_270",
"LABEL_271",
"LABEL_272",
"LABEL_273",
"LABEL_274",
"LABEL_275",
"LABEL_276",
"LABEL_277",
"LABEL_278",
"LABEL_279",
"LABEL_28",
"LABEL_280",
"LABEL_281",
"LABEL_282",
"LABEL_283",
"LABEL_284",
"LABEL_285",
"LABEL_286",
"LABEL_287",
"LABEL_288",
"LABEL_289",
"LABEL_29",
"LABEL_290",
"LABEL_291",
"LABEL_292",
"LABEL_293",
"LABEL_294",
"LABEL_295",
"LABEL_296",
"LABEL_297",
"LABEL_298",
"LABEL_299",
"LABEL_3",
"LABEL_30",
"LABEL_300",
"LABEL_301",
"LABEL_302",
"LABEL_303",
"LABEL_304",
"LABEL_305",
"LABEL_306",
"LABEL_307",
"LABEL_308",
"LABEL_309",
"LABEL_31",
"LABEL_310",
"LABEL_311",
"LABEL_312",
"LABEL_313",
"LABEL_314",
"LABEL_315",
"LABEL_316",
"LABEL_317",
"LABEL_318",
"LABEL_319",
"LABEL_32",
"LABEL_320",
"LABEL_321",
"LABEL_322",
"LABEL_323",
"LABEL_324",
"LABEL_325",
"LABEL_326",
"LABEL_327",
"LABEL_328",
"LABEL_329",
"LABEL_33",
"LABEL_330",
"LABEL_331",
"LABEL_332",
"LABEL_333",
"LABEL_334",
"LABEL_335",
"LABEL_336",
"LABEL_337",
"LABEL_338",
"LABEL_339",
"LABEL_34",
"LABEL_340",
"LABEL_341",
"LABEL_342",
"LABEL_343",
"LABEL_344",
"LABEL_345",
"LABEL_346",
"LABEL_347",
"LABEL_348",
"LABEL_349",
"LABEL_35",
"LABEL_350",
"LABEL_351",
"LABEL_352",
"LABEL_353",
"LABEL_354",
"LABEL_355",
"LABEL_356",
"LABEL_357",
"LABEL_358",
"LABEL_359",
"LABEL_36",
"LABEL_360",
"LABEL_361",
"LABEL_362",
"LABEL_363",
"LABEL_364",
"LABEL_365",
"LABEL_366",
"LABEL_367",
"LABEL_368",
"LABEL_369",
"LABEL_37",
"LABEL_370",
"LABEL_371",
"LABEL_372",
"LABEL_373",
"LABEL_374",
"LABEL_375",
"LABEL_376",
"LABEL_377",
"LABEL_378",
"LABEL_379",
"LABEL_38",
"LABEL_380",
"LABEL_381",
"LABEL_382",
"LABEL_383",
"LABEL_384",
"LABEL_385",
"LABEL_386",
"LABEL_387",
"LABEL_388",
"LABEL_389",
"LABEL_39",
"LABEL_390",
"LABEL_391",
"LABEL_392",
"LABEL_393",
"LABEL_394",
"LABEL_395",
"LABEL_396",
"LABEL_397",
"LABEL_398",
"LABEL_399",
"LABEL_4",
"LABEL_40",
"LABEL_400",
"LABEL_401",
"LABEL_402",
"LABEL_403",
"LABEL_404",
"LABEL_405",
"LABEL_406",
"LABEL_407",
"LABEL_408",
"LABEL_409",
"LABEL_41",
"LABEL_410",
"LABEL_411",
"LABEL_412",
"LABEL_413",
"LABEL_414",
"LABEL_415",
"LABEL_416",
"LABEL_417",
"LABEL_418",
"LABEL_419",
"LABEL_42",
"LABEL_420",
"LABEL_421",
"LABEL_422",
"LABEL_423",
"LABEL_424",
"LABEL_425",
"LABEL_426",
"LABEL_427",
"LABEL_428",
"LABEL_429",
"LABEL_43",
"LABEL_430",
"LABEL_431",
"LABEL_432",
"LABEL_433",
"LABEL_434",
"LABEL_435",
"LABEL_436",
"LABEL_437",
"LABEL_438",
"LABEL_439",
"LABEL_44",
"LABEL_440",
"LABEL_441",
"LABEL_442",
"LABEL_443",
"LABEL_444",
"LABEL_445",
"LABEL_446",
"LABEL_447",
"LABEL_448",
"LABEL_449",
"LABEL_45",
"LABEL_450",
"LABEL_451",
"LABEL_452",
"LABEL_453",
"LABEL_454",
"LABEL_455",
"LABEL_456",
"LABEL_457",
"LABEL_458",
"LABEL_459",
"LABEL_46",
"LABEL_460",
"LABEL_461",
"LABEL_462",
"LABEL_463",
"LABEL_464",
"LABEL_465",
"LABEL_466",
"LABEL_467",
"LABEL_468",
"LABEL_469",
"LABEL_47",
"LABEL_470",
"LABEL_471",
"LABEL_472",
"LABEL_473",
"LABEL_474",
"LABEL_475",
"LABEL_476",
"LABEL_477",
"LABEL_478",
"LABEL_479",
"LABEL_48",
"LABEL_480",
"LABEL_481",
"LABEL_482",
"LABEL_483",
"LABEL_484",
"LABEL_485",
"LABEL_486",
"LABEL_487",
"LABEL_488",
"LABEL_489",
"LABEL_49",
"LABEL_490",
"LABEL_491",
"LABEL_492",
"LABEL_493",
"LABEL_494",
"LABEL_495",
"LABEL_496",
"LABEL_497",
"LABEL_498",
"LABEL_499",
"LABEL_5",
"LABEL_50",
"LABEL_500",
"LABEL_501",
"LABEL_502",
"LABEL_503",
"LABEL_504",
"LABEL_505",
"LABEL_506",
"LABEL_507",
"LABEL_508",
"LABEL_509",
"LABEL_51",
"LABEL_510",
"LABEL_511",
"LABEL_52",
"LABEL_53",
"LABEL_54",
"LABEL_55",
"LABEL_56",
"LABEL_57",
"LABEL_58",
"LABEL_59",
"LABEL_6",
"LABEL_60",
"LABEL_61",
"LABEL_62",
"LABEL_63",
"LABEL_64",
"LABEL_65",
"LABEL_66",
"LABEL_67",
"LABEL_68",
"LABEL_69",
"LABEL_7",
"LABEL_70",
"LABEL_71",
"LABEL_72",
"LABEL_73",
"LABEL_74",
"LABEL_75",
"LABEL_76",
"LABEL_77",
"LABEL_78",
"LABEL_79",
"LABEL_8",
"LABEL_80",
"LABEL_81",
"LABEL_82",
"LABEL_83",
"LABEL_84",
"LABEL_85",
"LABEL_86",
"LABEL_87",
"LABEL_88",
"LABEL_89",
"LABEL_9",
"LABEL_90",
"LABEL_91",
"LABEL_92",
"LABEL_93",
"LABEL_94",
"LABEL_95",
"LABEL_96",
"LABEL_97",
"LABEL_98",
"LABEL_99"
] | ---
license: apache-2.0
# inference: false
# pipeline_tag: zero-shot-image-classification
pipeline_tag: feature-extraction
# inference:
# parameters:
tags:
- clip
- zh
- image-text
- feature-extraction
---
# Model Details
This model is a Chinese CLIP model trained on the [Noah-Wukong Dataset](https://wukong-dataset.github.io/wukong-dataset/), which contains about 100M Chinese image-text pairs. We use ViT-B/32 from [OpenAI](https://github.com/openai/CLIP) as the image encoder and the Chinese pre-trained language model [chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext) as the text encoder. We freeze the image encoder and fine-tune only the text encoder. The model was trained for 20 epochs, which took about 10 days on 8 A100 GPUs.
# Taiyi (太乙)
Taiyi models are a branch of the Fengshenbang (封神榜) series of models. The models in Taiyi are pre-trained with multimodal pre-training strategies. We will release more image-text models trained on Chinese datasets to benefit the Chinese community.
# Usage
```python
from PIL import Image
import requests
import torch
from transformers import BertForSequenceClassification, BertConfig, BertTokenizer
from transformers import CLIPProcessor, CLIPModel
import numpy as np
query_texts = ["一只猫", "一只狗", "两只猫", "两只老虎", "一只老虎"]  # candidate captions; replace with any texts you like
# Load the Taiyi Chinese text encoder
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
text_encoder = BertForSequenceClassification.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese").eval()
text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']
url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # replace with any image URL
# Load the CLIP image encoder
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = processor(images=Image.open(requests.get(url, stream=True).raw), return_tensors="pt")
with torch.no_grad():
image_features = clip_model.get_image_features(**image)
text_features = text_encoder(text).logits
# Normalize the features
image_features = image_features / image_features.norm(dim=1, keepdim=True)
text_features = text_features / text_features.norm(dim=1, keepdim=True)
# Compute cosine similarity; logit_scale is a learned scale factor
logit_scale = clip_model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ text_features.t()
logits_per_text = logits_per_image.t()
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
print(np.around(probs, 3))
```
# Evaluation
### Zero-Shot Classification
| model | dataset | Top1 | Top5 |
| ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-102M-Chinese | ImageNet1k-CN | 41.00% | 69.19% |
### Zero-Shot Text-to-Image Retrieval
| model | dataset | Top1 | Top5 | Top10 |
| ---- | ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-102M-Chinese | Flickr30k-CNA-test | 44.06 % | 71.42% | 80.84% |
| Taiyi-CLIP-Roberta-102M-Chinese | COCO-CN-test | 46.30 % | 78.00% | 89.00% |
| Taiyi-CLIP-Roberta-102M-Chinese | wukong50k | 48.67 % | 81.77% | 90.09% |
# Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | 3,386 |
D3xter1922/electra-base-discriminator-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: electra-base-discriminator-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6824089073723449
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-cola
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6367
- Matthews Correlation: 0.6824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4139 | 1.0 | 535 | 0.4137 | 0.6381 |
| 0.2452 | 2.0 | 1070 | 0.4887 | 0.6504 |
| 0.17 | 3.0 | 1605 | 0.5335 | 0.6757 |
| 0.1135 | 4.0 | 2140 | 0.6367 | 0.6824 |
| 0.0817 | 5.0 | 2675 | 0.6742 | 0.6755 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 2,026 |
tals/albert-base-vitaminc | [
"NOT ENOUGH INFO",
"REFUTES",
"SUPPORTS"
] | ---
language: en
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
| 2,357 |
AnReu/albert-for-math-ar-base-ft | null | # ALBERT for Math AR
This model is further pre-trained on questions and answers from Mathematics StackExchange. It is based on ALBERT base v2 and uses the same tokenizer. In addition to pre-training, the model was fine-tuned on math question-answer retrieval. The sequence classification head is trained to output a relevance score when you input the question as the first segment and the answer as the second segment. You can use the relevance score to rank different answers for retrieval.
## Usage
```python
# based on https://huggingface.co/docs/transformers/main/en/task_summary#sequence-classification
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained("AnReu/albert-for-math-ar-base-ft")
classes = ["non relevant", "relevant"]
sequence_0 = "How can I calculate x in $3x = 5$"
sequence_1 = "Just divide by 3: $x = \\frac{5}{3}$"
sequence_2 = "The general rule for squaring a sum is $(a+b)^2=a^2+2ab+b^2$"
# The tokenizer will automatically add any model specific separators (i.e. <CLS> and <SEP>) and tokens to
# the sequence, as well as compute the attention masks.
irrelevant = tokenizer(sequence_0, sequence_2, return_tensors="pt")
relevant = tokenizer(sequence_0, sequence_1, return_tensors="pt")
irrelevant_classification_logits = model(**irrelevant).logits
relevant_classification_logits = model(**relevant).logits
irrelevant_results = torch.softmax(irrelevant_classification_logits, dim=1).tolist()[0]
relevant_results = torch.softmax(relevant_classification_logits, dim=1).tolist()[0]
# Should be irrelevant
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(irrelevant_results[i] * 100))}%")
# Should be relevant
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(relevant_results[i] * 100))}%")
```
## Reference
If you use this model, please consider referencing our paper:
```bibtex
@inproceedings{reusch2021tu_dbs,
title={TU\_DBS in the ARQMath Lab 2021, CLEF},
author={Reusch, Anja and Thiele, Maik and Lehner, Wolfgang},
year={2021},
organization={CLEF}
}
```
| 2,183 |
moshew/bert-mini-sst2-distilled | [
"negative",
"positive"
] | Entry not found | 15 |
echarlaix/bert-large-uncased-whole-word-masking-finetuned-sst-2 | null | Entry not found | 15 |
google/tapas-small-finetuned-tabfact | null | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS small model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_small`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly train this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
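As a complement, a minimal TabFact-style sketch (the toy table and claim are illustrative; check `model.config.id2label` for the exact label names):
```python
import pandas as pd
import torch
from transformers import TapasTokenizer, TapasForSequenceClassification

name = "google/tapas-small-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForSequenceClassification.from_pretrained(name)

table = pd.DataFrame({"Player": ["Ann", "Bo"], "Goals": ["3", "1"]})  # cells must be strings
inputs = tokenizer(table=table, queries=["Ann scored more goals than Bo"], return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```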
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
``` | 4,870 |
rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- hi
- en
tags:
- hi
- en
- codemix
license: "apache-2.0"
datasets:
- SAIL 2017
metrics:
- fscore
- accuracy
---
# BERT codemixed base model for hinglish (cased)
## Model description
Input for the model: Any codemixed hinglish text
Output for the model: Sentiment. (0 - Negative, 1 - Neutral, 2 - Positive)
I took a bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the [SAIL 2017](http://www.dasdipankar.com/SAILCodeMixed.html) dataset.
Performance of this model on the SAIL 2017 dataset
| metric | score |
|------------|----------|
| acc | 0.588889 |
| f1 | 0.582678 |
| acc_and_f1 | 0.585783 |
| precision | 0.586516 |
| recall | 0.588889 |
## Intended uses & limitations
#### How to use
Here is how to use this model to classify a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment")
model = AutoModelForSequenceClassification.from_pretrained("rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in *TensorFlow*:
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment')
model = TFAutoModelForSequenceClassification.from_pretrained("rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment")  # add from_pt=True if only PyTorch weights are available
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
#### Limitations and bias
Coming soon!
## Training data
I fine-tuned this [pretrained model](https://huggingface.co/bert-base-multilingual-cased) on the SAIL 2017 dataset [link](http://amitavadas.com/SAIL/Data/SAIL_2017.zip).
## Training procedure
No preprocessing.
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
}
```
| 2,650 |
yoshitomo-matsubara/bert-base-uncased-qqp | null | ---
language: en
tags:
- bert
- qqp
- glue
- torchdistill
license: apache-2.0
datasets:
- qqp
metrics:
- f1
- accuracy
---
`bert-base-uncased` fine-tuned on QQP dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
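A minimal inference sketch for question-pair duplicate detection (the example pair is illustrative; label names come from the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "yoshitomo-matsubara/bert-base-uncased-qqp"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```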
| 827 |
juliensimon/distilbert-amazon-shoe-reviews | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-amazon-shoe-reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-amazon-shoe-reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9532
- Accuracy: 0.5779
- F1: [0.62616119 0.46456105 0.50993865 0.55755123 0.734375 ]
- Precision: [0.62757927 0.46676662 0.49148534 0.58430541 0.72415507]
- Recall: [0.6247495 0.46237624 0.52983172 0.53313982 0.74488753]
## Model description
More information needed
## Intended uses & limitations
More information needed
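Pending the full card, a minimal usage sketch (mapping LABEL_i to a star rating is an assumption; verify against the training data):
```python
from transformers import pipeline

rate_review = pipeline(
    "text-classification", model="juliensimon/distilbert-amazon-shoe-reviews"
)
print(rate_review("Comfortable, but they fell apart after two weeks."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- LABEL_i presumably means i+1 stars
```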
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------------------------------------------:|:--------------------------------------------------------:|:--------------------------------------------------------:|
| 0.9713 | 1.0 | 2813 | 0.9532 | 0.5779 | [0.62616119 0.46456105 0.50993865 0.55755123 0.734375 ] | [0.62757927 0.46676662 0.49148534 0.58430541 0.72415507] | [0.6247495 0.46237624 0.52983172 0.53313982 0.74488753] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 2,121 |
lgodwangl/sent_chineses | [
"negative",
"neutral",
"positive"
] | Entry not found | 15 |
Raychanan/chinese-roberta-wwm-ext-FineTuned | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Recognai/zeroshot_selectra_small | [
"contradiction",
"neutral",
"entailment"
] | ---
language: es
tags:
- zero-shot-classification
- nli
- pytorch
datasets:
- xnli
pipeline_tag: zero-shot-classification
license: apache-2.0
widget:
- text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo"
candidate_labels: "cultura, sociedad, economia, salud, deportes"
---
# Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
*Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
## Usage
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/zeroshot_selectra_medium")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['sociedad', 'cultura', 'salud', 'economia', 'deportes'],
'scores': [0.3711881935596466,
0.25650349259376526,
0.17355826497077942,
0.1641489565372467,
0.03460107371211052]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Metrics
| Model | Params | XNLI (acc) | \*MLSUM (acc) |
| --- | --- | --- | --- |
| [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 |
| [zs SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium) | 41M | **0.807** | **0.589** |
| zs SELECTRA small | **22M** | 0.795 | 0.446 |
\*evaluated with zero-shot learning (ZSL)
- **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion.
- **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb)
## Training
Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
- Javier Lopez ([GitHub](https://github.com/javispp)) | 3,406 |
aloxatel/mbert | [
"LABEL_0",
"LABEL_1"
] | Entry not found | 15 |
cardiffnlp/bertweet-base-hate | null | 0 | |
huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
airKlizz/xlm-roberta-base-germeval21-toxic-with-task-specific-pretraining | null | Entry not found | 15 |
marma/bert-base-swedish-cased-sentiment | [
"NEGATIVE",
"POSITIVE"
] | Experimental sentiment analysis based on ~20k of App Store reviews in Swedish.
### Usage
```python
from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='marma/bert-base-swedish-cased-sentiment')
>>> sa('Det här är ju fantastiskt!')
[{'label': 'POSITIVE', 'score': 0.9974609613418579}]
>>> sa('Den här appen suger!')
[{'label': 'NEGATIVE', 'score': 0.998340368270874}]
>>> sa('Det är fruktansvärt.')
[{'label': 'NEGATIVE', 'score': 0.998340368270874}]
>>> sa('Det är fruktansvärt bra.')
[{'label': 'POSITIVE', 'score': 0.998340368270874}]
``` | 573 |
prajjwal1/roberta-base-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Roberta-base trained on MNLI.
| Task | Accuracy |
|---------|----------|
| MNLI | 86.32 |
| MNLI-mm | 86.43 |
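A minimal premise-hypothesis sketch (the checkpoint exposes generic LABEL_0-LABEL_2 names; their mapping to entailment/neutral/contradiction is not documented here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "prajjwal1/roberta-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "A man is playing a guitar.", "A person is making music.", return_tensors="pt"
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```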
You can also check out:
- `prajjwal1/roberta-base-mnli`
- `prajjwal1/roberta-large-mnli`
- `prajjwal1/albert-base-v2-mnli`
- `prajjwal1/albert-base-v1-mnli`
- `prajjwal1/albert-large-v2-mnli`
[@prajjwal_1](https://twitter.com/prajjwal_1)
| 364 |
JminJ/kcElectra_base_Bad_Sentence_Classifier | [
"bad_sen",
"ok_sen"
] | # Bad_text_classifier
## Model Introduction
We release a model that classifies whether comments and chat messages found across the internet contain sensitive content. The model was fine-tuned on public datasets whose labels were revised and then merged together. Please understand that the model cannot always judge every sentence correctly.
```
NOTE)
Due to copyright issues with the public source data, the modified data used to train this model cannot be released.
Also, the model's outputs are unrelated to the author's own opinions.
```
## Dataset
### data label
* **0 : bad sentence**
* **1 : not bad sentence**
### Datasets used
* [smilegate-ai/Korean Unsmile Dataset](https://github.com/smilegate-ai/korean_unsmile_dataset)
* [kocohub/Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
### Dataset processing
The two datasets, which were not originally binary-classification data, were re-labeled into binary form; then only the label-1 (not bad sentence) examples were extracted from the Korean HateSpeech Dataset and merged into the processed Korean Unsmile Dataset.
</br>
**Some examples originally labeled clean in the Korean Unsmile Dataset were relabeled as 0 (bad sentence).**
* Among sentences containing "~노", those that also contain "이기" or "노무" were relabeled as 0 (bad sentence)
* Examples containing sexual nuances such as "좆" or "봊" were relabeled as 0 (bad sentence)
</br>
## Model Training
* Fine-tuning was performed using ElectraForSequenceClassification from Hugging Face transformers.
* Three publicly available Korean ELECTRA models were each fine-tuned separately.
### Models used
* [Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA)
* [monologg/koELECTRA](https://github.com/monologg/KoELECTRA)
* [tunib/electra-ko-base](https://huggingface.co/tunib/electra-ko-base)
## How to use the model?
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained('JminJ/kcElectra_base_Bad_Sentence_Classifier')
tokenizer = AutoTokenizer.from_pretrained('JminJ/kcElectra_base_Bad_Sentence_Classifier')
```
## Model Validation Accuracy
| model | accuracy |
| ---------- | ---------- |
| kcElectra_base_fp16_wd_custom_dataset | 0.8849 |
| tunibElectra_base_fp16_wd_custom_dataset | 0.8726 |
| koElectra_base_fp16_wd_custom_dataset | 0.8434 |
```
Note)
All models were trained with the same seed, learning rate (3e-06), weight decay lambda (0.001), and batch size (128).
```
## Contact
* jminju254@gmail.com
</br></br>
## Github
* https://github.com/JminJ/Bad_text_classifier
</br></br>
## Reference
* [Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA)
* [monologg/koELECTRA](https://github.com/monologg/KoELECTRA)
* [tunib/electra-ko-base](https://huggingface.co/tunib/electra-ko-base)
* [smilegate-ai/Korean Unsmile Dataset](https://github.com/smilegate-ai/korean_unsmile_dataset)
* [kocohub/Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
* [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://arxiv.org/abs/2003.10555)
| 2,598 |
nlptown/flaubert_small_cased_sentiment | [
"very_negative",
"negative",
"mixed",
"positive",
"very_positive"
] | ---
language:
- fr
datasets:
- amazon_reviews_multi
license: mit
---
# flaubert_small_cased_sentiment
This is a `flaubert_small_cased` model finetuned for sentiment analysis on product reviews in French. It predicts the sentiment of the review, from `very_negative` (1 star) to `very_positive` (5 stars).
This model is intended for direct use as a sentiment analysis model for French product reviews, or for further finetuning on related sentiment analysis tasks.
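For quick experimentation, a minimal pipeline sketch (the example review and output are illustrative):
```python
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis", model="nlptown/flaubert_small_cased_sentiment"
)
print(sentiment("Très déçu, le produit est arrivé cassé."))
# e.g. [{'label': 'very_negative', 'score': ...}]
```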
## Training data
The training data consists of the French portion of `amazon_reviews_multi`, supplemented with another 140,000 similar reviews.
## Accuracy
The finetuned model was evaluated on the French test set of `amazon_reviews_multi`.
- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------------- | ------------------- |
| French | 61.56% | 95.66%
## Contact
[NLP Town](https://www.nlp.town) offers a suite of sentiment models for a wide range of languages, including an improved multilingual model through [RapidAPI](https://rapidapi.com/nlp-town-nlp-town-default/api/multilingual-sentiment-analysis2/).
Feel free to contact us for questions, feedback and/or requests for similar models. | 1,447 |
Hate-speech-CNERG/dehatebert-mono-arabic | [
"NON_HATE",
"HATE"
] | ---
language: ar
license: apache-2.0
---
This model is used for detecting **hatespeech** in the **Arabic language**. The mono in the name refers to the monolingual setting, where the model is trained using only Arabic language data. It is finetuned from the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.877609 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
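A minimal usage sketch (an assumption, using the standard transformers pipeline; the label names NON_HATE/HATE follow this repository's configuration):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Hate-speech-CNERG/dehatebert-mono-arabic")
print(classifier("مثال على جملة عربية"))  # hypothetical Arabic input; returns HATE or NON_HATE with a score
```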
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| 1,055 |
persiannlp/parsbert-base-parsinlu-multiple-choice | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- parsbert
- persian
- farsi
pipeline_tag: text-classification
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is a parsbert-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from typing import List
import torch
from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer
model_name = "persiannlp/parsbert-base-parsinlu-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config)
def run_model(question: str, candidates: List[str]):
    assert len(candidates) == 4, "you need four candidates"
    choices_inputs = []
    for c in candidates:
        text_a = ""  # empty context
        text_b = question + " " + c  # each choice is encoded as question + candidate
        inputs = tokenizer(
            text_a,
            text_b,
            add_special_tokens=True,
            max_length=128,
            padding="max_length",
            truncation=True,
            return_overflowing_tokens=True,
        )
        choices_inputs.append(inputs)
    input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs])
    output = model(input_ids=input_ids)
    print(output)
    return output

run_model(question="وسیع ترین کشور جهان کدام است؟", candidates=["آمریکا", "کانادا", "روسیه", "چین"])
run_model(question="طامع یعنی ؟", candidates=["آزمند", "خوش شانس", "محتاج", "مطمئن"])
run_model(
    question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ",
    candidates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"])
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/ | 2,054 |
lewiswatson/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.918
- name: F1
type: f1
value: 0.9182094401352938
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
verified: true
- name: Precision Macro
type: precision
value: 0.8948630809230339
verified: true
- name: Precision Micro
type: precision
value: 0.9185
verified: true
- name: Precision Weighted
type: precision
value: 0.9190547804558933
verified: true
- name: Recall Macro
type: recall
value: 0.860108882009274
verified: true
- name: Recall Micro
type: recall
value: 0.9185
verified: true
- name: Recall Weighted
type: recall
value: 0.9185
verified: true
- name: F1 Macro
type: f1
value: 0.8727941247828231
verified: true
- name: F1 Micro
type: f1
value: 0.9185
verified: true
- name: F1 Weighted
type: f1
value: 0.9177368694234422
verified: true
- name: loss
type: loss
value: 0.21991275250911713
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2287
- Accuracy: 0.918
- F1: 0.9182
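A minimal usage sketch (an assumption, using the standard transformers pipeline; `top_k=None` returns scores for all six emotion labels in recent transformers versions):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="lewiswatson/distilbert-base-uncased-finetuned-emotion",
                      top_k=None)  # return all label scores instead of just the top one
print(classifier("I can't believe this actually worked!"))
```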
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8478 | 1.0 | 250 | 0.3294 | 0.9015 | 0.8980 |
| 0.2616 | 2.0 | 500 | 0.2287 | 0.918 | 0.9182 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| 2,983 |
aychang/distilbert-base-cased-trec-coarse | [
"ABBR",
"DESC",
"ENTY",
"HUM",
"LOC",
"NUM"
] | ---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- trec
metrics:
---
# TREC 6-class Task: distilbert-base-cased
## Model description
A simple base distilBERT model trained on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/distilbert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)
results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/distilbert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC https://huggingface.co/datasets/trec
## Training procedure
Preprocessing, hardware used, hyperparameters...
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=500,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.97,
'eval_f1': array([0.98220641, 0.91620112, 1. , 0.97709924, 0.98678414,
0.97560976]),
'eval_loss': 0.14275787770748138,
'eval_precision': array([0.96503497, 0.96470588, 1. , 0.96969697, 0.98245614,
0.96385542]),
'eval_recall': array([1. , 0.87234043, 1. , 0.98461538, 0.99115044,
0.98765432]),
'eval_runtime': 0.9731,
'eval_samples_per_second': 513.798}
```
| 2,332 |
maxpe/bertin-roberta-base-spanish_semeval18_emodetection | null | # BERTIN-roBERTa-base-Spanish_SemEval18_Emodetection
This is a BERTIN-roBERTa-base-Spanish model trained on ~3500 tweets in Spanish annotated for 11 emotion categories in [SemEval-2018 Task 1: Affect in Tweets: SubTask 5: Emotion Classification](https://competitions.codalab.org/competitions/17751).
Run the classifier on the test set of the competition:
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModel
from torch.utils.data import DataLoader
import torch
import pandas as pd
# choose GPU when available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-roberta-base-spanish",model_max_length=512)
# build custom model with classification layer on top and a dropout layer before
class RobertaClass(torch.nn.Module):
def __init__(self):
super(RobertaClass, self).__init__()
self.l1 = AutoModel.from_pretrained("bertin-project/bertin-roberta-base-spanish",return_dict=False)
self.l2 = torch.nn.Dropout(0.3)
self.l3 = torch.nn.Linear(768, 11)
def forward(self, input_ids, attention_mask):
_, output_1= self.l1(input_ids=input_ids, attention_mask=attention_mask)
output_2 = self.l2(output_1)
output = self.l3(output_2)
return output
model_name="bertin-roberta-base-spanish_semeval18_emodetection/pytorch_model.bin"
model=RobertaClass()
model.load_state_dict(torch.load(model_name,map_location=torch.device(device)))
model.eval()
# run on more than 1 GPU
model = torch.nn.DataParallel(model)
model.to(device)
twnames=['anger','anticipation','disgust','fear','joy','love','optimism','pessimism','sadness','surprise','trust']
# load from hugging face dataset hub
testset_raw = load_dataset('sem_eval_2018_task_1','subtask5.spanish',split='test')
# remove old columns
testset=testset_raw.remove_columns(twnames+["ID"])
# tokenize
testset_tokenized = testset.map(lambda e: tokenizer(e['Tweet'], truncation=True, padding='max_length'), batched=True)
testset_tokenized=testset_tokenized.remove_columns("Tweet")
testset_tokenized.set_format(type='torch', columns=['input_ids', 'attention_mask'])
outfile="predicted_2018-E-c-Es-test-gold.txt"
MAX_LEN = 512
VALID_BATCH_SIZE = 8
# set batch size according to available RAM
# VALID_BATCH_SIZE = 1000
# set num_workers for parallel processing
inference_params = {'batch_size': VALID_BATCH_SIZE,
'shuffle': False,
# 'num_workers': 1
}
inference_loader = DataLoader(testset_tokenized, **inference_params)
open(outfile,"w").close()
with torch.no_grad():
# change lines for progress manager
# for _, data in tqdm(enumerate(inference_loader, 0),total=len(inference_loader)):
for _, data in enumerate(inference_loader, 0):
        # move the batch to the same device as the model
        outputs = model(input_ids=data['input_ids'].to(device), attention_mask=data['attention_mask'].to(device))
fin_outputs=torch.sigmoid(outputs).cpu().detach().numpy().tolist()
pd.DataFrame(fin_outputs).to_csv(outfile,index=False,header=False,sep="\t",mode='a')
# # dataset from file (one text per line)
# from datasets import Dataset
# with open(linesoftextfile,"rb") as textfile:
# textdict={"text":[x.decode().rstrip("\n") for x in textfile.readlines()]}
# inference_dataset=Dataset.from_dict(textdict)
# del(textdict)
``` | 3,391 |
DeepPavlov/roberta-large-winogrande | [
"False",
"True"
] | ---
language:
- en
datasets:
- winogrande
widget:
- text: "The roof of Rachel's home is old and falling apart, while Betty's is new. The home value of </s> Rachel is lower."
- text: "The wooden doors at my friends work are worse than the wooden desks at my work, because the </s> desks material is cheaper."
- text: "Postal Service were to reduce delivery frequency. </s> The postal service could deliver less frequently."
- text: "I put the cake away in the refrigerator. It has a lot of butter in it. </s> The cake has a lot of butter in it."
---
# RoBERTa Large model fine-tuned on Winogrande
This model was fine-tuned on the Winogrande dataset (XL size) in a sequence classification task format, meaning that the original pairs of sentences
with corresponding options filled in were separated, shuffled, and classified independently of each other.
## Model description
## Intended use & limitations
### How to use
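A minimal sketch (an assumption based on the sentence-pair format described under Training data below; the label order [False, True] follows this model's configuration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "DeepPavlov/roberta-large-winogrande"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# first segment: the sentence up to the "_" placeholder; second segment: the option plus the rest
inputs = tokenizer("The plant took up too much room in the urn, because the ",
                   "urn was small.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # probabilities over [False, True]; a higher True score means the option fits
```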
## Training data
[WinoGrande-XL](https://huggingface.co/datasets/winogrande) reformatted the following way:
1. Each sentence was split on "`_`" placeholder symbol.
2. Each option was concatenated with the second part of the split, thus transforming each example into two text segment pairs.
3. Text segment pairs corresponding to correct and incorrect options were marked with `True` and `False` labels accordingly.
4. Text segment pairs were shuffled thereafter.
For example,
```json
{
"answer": "2",
"option1": "plant",
"option2": "urn",
"sentence": "The plant took up too much room in the urn, because the _ was small."
}
```
becomes
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "plant was small.",
"label": false
}
```
and
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "urn was small.",
"label": true
}
```
These sentence pairs are then treated as independent examples.
### BibTeX entry and citation info
```bibtex
@article{sakaguchi2019winogrande,
title={WinoGrande: An Adversarial Winograd Schema Challenge at Scale},
author={Sakaguchi, Keisuke and Bras, Ronan Le and Bhagavatula, Chandra and Choi, Yejin},
journal={arXiv preprint arXiv:1907.10641},
year={2019}
}
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 3,040 |
justin871030/bert-base-uncased-goemotions-ekman-finetuned | [
"anger",
"disgust",
"fear",
"joy",
"neutral",
"sadness",
"surprise"
] | ---
language: en
tags:
- go-emotion
- text-classification
- pytorch
datasets:
- go_emotions
metrics:
- f1
widget:
- text: "Thanks for giving advice to the people who need it! 👌🙏"
license: mit
---
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Did label smoothing while training.
4. Used weighted loss and focal loss to help with the classes that trained poorly.
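A minimal usage sketch (an assumption that the checkpoint loads with the standard sequence-classification head; since GoEmotions is multi-label, per-label sigmoid scores are shown rather than a softmax). The input is the widget example above; the label order follows this repository's label list:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "justin871030/bert-base-uncased-goemotions-ekman-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Thanks for giving advice to the people who need it! 👌🙏", return_tensors="pt")
with torch.no_grad():
    scores = torch.sigmoid(model(**inputs).logits)[0]  # one independent score per emotion
for label, score in zip(["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"], scores.tolist()):
    print(label, round(score, 3))
```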
| 499 |
navteca/quora-roberta-base | [
"LABEL_0"
] | ---
datasets:
- quora
language: en
license: mit
pipeline_tag: text-classification
tags:
- roberta
- text-classification
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [roberta-base](https://huggingface.co/roberta-base).
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset.
The model will predict a score between 0 and 1: How likely the two given questions are duplicates.
Note: The model is not suitable to estimate the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
The trained model can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('navteca/quora-roberta-base')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
| 1,153 |
IDEA-CCNL/Erlangshen-Roberta-330M-NLI | [
"CONTRADICTION",
"NEUTRAL",
"ENTAILMENT"
] | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "今天心情不好[SEP]今天很开心"
---
# Erlangshen-Roberta-330M-NLI, a Chinese model, one of the models of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
We collected 4 NLI (Natural Language Inference) datasets in the Chinese domain for finetuning, with a total of 1,014,787 samples. Our model is mainly based on [roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large).
## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer=BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-330M-NLI')
model=BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-330M-NLI')
texta='今天的饭不好吃'
textb='今天心情不好'
output=model(torch.tensor([tokenizer.encode(texta,textb)]))
print(torch.nn.functional.softmax(output.logits,dim=-1))
```
## Scores on downstream chinese tasks (without any data augmentation)
| Model | cmnli | ocnli | snli |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-NLI | 80.83 | 78.56 | 88.01 |
| Erlangshen-Roberta-330M-NLI | 82.25 | 79.82 | 88 |
| Erlangshen-MegatronBert-1.3B-NLI | 84.52 | 84.17 | 88.67 |
## Citation
If you find this resource useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | 1,576 |
Souvikcmsa/SentimentAnalysisDistillBERT | [
"negative",
"neutral",
"positive"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Souvikcmsa/autotrain-data-sentiment_analysis
co2_eq_emissions: 0.015536746909294205
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 762923432
- CO2 Emissions (in grams): 0.015536746909294205
## Validation Metrics
- Loss: 0.49825894832611084
- Accuracy: 0.7962895598399418
- Macro F1: 0.7997458031044901
- Micro F1: 0.7962895598399418
- Weighted F1: 0.796365325858282
- Macro Precision: 0.7995724418486833
- Micro Precision: 0.7962895598399418
- Weighted Precision: 0.7965384250324863
- Macro Recall: 0.8000290112564951
- Micro Recall: 0.7962895598399418
- Weighted Recall: 0.7962895598399418
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Souvikcmsa/autotrain-sentiment_analysis-762923432
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Souvikcmsa/autotrain-sentiment_analysis-762923432", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Souvikcmsa/autotrain-sentiment_analysis-762923432", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,440 |
MoritzLaurer/DeBERTa-v3-small-mnli-fever-docnli-ling-2c | [
"entailment",
"not_entailment"
] | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought the movie was actually disappointing. [SEP] The movie was good."
---
# DeBERTa-v3-small-mnli-fever-docnli-ling-2c
## Model description
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is [DeBERTa-v3-small from Microsoft](https://huggingface.co/microsoft/deberta-v3-small). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf) as well as the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/DeBERTa-v3-small-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
DeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
    warmup_ratio=0.1,                # ratio of total steps used for learning rate warmup
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c
---------|----------|---------|----------|----------
0.927 | 0.921 | 0.892 | 0.684 | 0.673
## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues. | 4,578 |
alperiox/autonlp-user-review-classification-536415182 | [
"CONTENT",
"INTERFACE",
"SUBSCRIPTION",
"USER_EXPERIENCE"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- alperiox/autonlp-data-user-review-classification
co2_eq_emissions: 1.268309634217171
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 536415182
- CO2 Emissions (in grams): 1.268309634217171
## Validation Metrics
- Loss: 0.44733062386512756
- Accuracy: 0.8873239436619719
- Macro F1: 0.8859416445623343
- Micro F1: 0.8873239436619719
- Weighted F1: 0.8864646766540891
- Macro Precision: 0.8848522167487685
- Micro Precision: 0.8873239436619719
- Weighted Precision: 0.8883299798792756
- Macro Recall: 0.8908045977011494
- Micro Recall: 0.8873239436619719
- Weighted Recall: 0.8873239436619719
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alperiox/autonlp-user-review-classification-536415182
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,441 |
boychaboy/SNLI_roberta-base | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
textattack/distilbert-base-uncased-ag-news | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ## TextAttack Model Card

This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9478947368421052, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
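A minimal usage sketch (an assumption, using the standard transformers pipeline; `LABEL_0`–`LABEL_3` correspond to the ag_news label order World, Sports, Business, Sci/Tech):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="textattack/distilbert-base-uncased-ag-news")
# LABEL_0..LABEL_3 map to World, Sports, Business, Sci/Tech in ag_news
print(classifier("The championship game went into double overtime last night."))
```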
| 630 |
anahitapld/DABert | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",
"LABEL_3",
"LABEL_30",
"LABEL_31",
"LABEL_32",
"LABEL_33",
"LABEL_34",
"LABEL_35",
"LABEL_36",
"LABEL_37",
"LABEL_38",
"LABEL_39",
"LABEL_4",
"LABEL_40",
"LABEL_41",
"LABEL_42",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
---
| 28 |
csatapathy/interview-ratings-bert | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
persiannlp/wikibert-base-parsinlu-multiple-choice | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- wikibert
- persian
- farsi
pipeline_tag: text-classification
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is a wikibert-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from typing import List
import torch
from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer
model_name = "persiannlp/wikibert-base-parsinlu-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config)
def run_model(question: str, candidates: List[str]):
    assert len(candidates) == 4, "you need four candidates"
    choices_inputs = []
    for c in candidates:
        text_a = ""  # empty context
        text_b = question + " " + c  # each choice is encoded as question + candidate
        inputs = tokenizer(
            text_a,
            text_b,
            add_special_tokens=True,
            max_length=128,
            padding="max_length",
            truncation=True,
            return_overflowing_tokens=True,
        )
        choices_inputs.append(inputs)
    input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs])
    output = model(input_ids=input_ids)
    print(output)
    return output

run_model(question="وسیع ترین کشور جهان کدام است؟", candidates=["آمریکا", "کانادا", "روسیه", "چین"])
run_model(question="طامع یعنی ؟", candidates=["آزمند", "خوش شانس", "محتاج", "مطمئن"])
run_model(
    question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ",
    candidates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"])
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/ | 2,054 |
austinmw/distilbert-base-uncased-finetuned-health_facts | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- health_fact
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-health_facts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: health_fact
type: health_fact
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.628500823723229
- name: F1
type: f1
value: 0.6544946803476833
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-health_facts
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the health_fact dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1227
- Accuracy: 0.6285
- F1: 0.6545
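A minimal usage sketch (an assumption, using the standard transformers pipeline; `LABEL_0`–`LABEL_3` are assumed to follow the health_fact label order false/mixture/true/unproven):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="austinmw/distilbert-base-uncased-finetuned-health_facts")
# LABEL_0..LABEL_3 assumed to map to false, mixture, true, unproven (health_fact order)
print(classifier("Drinking green tea cures the common cold."))  # hypothetical claim
```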
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1367 | 1.0 | 154 | 0.9423 | 0.5560 | 0.6060 |
| 0.9444 | 2.0 | 308 | 0.9267 | 0.5733 | 0.6170 |
| 0.8248 | 3.0 | 462 | 0.9483 | 0.5832 | 0.6256 |
| 0.7213 | 4.0 | 616 | 1.0119 | 0.5815 | 0.6219 |
| 0.608 | 5.0 | 770 | 1.1227 | 0.6285 | 0.6545 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 2,052 |
blanchefort/rubert-base-cased-sentiment-med | [
"NEUTRAL",
"POSITIVE",
"NEGATIVE"
] | ---
language:
- ru
tags:
- sentiment
- text-classification
---
# RuBERT for Sentiment Analysis of Medical Reviews
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on corpus of medical reviews.
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-med')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-med', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
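A short usage example for the `predict` helper above (the review text is a hypothetical example; the label mapping follows the list above):
```python
# 0: NEUTRAL, 1: POSITIVE, 2: NEGATIVE
print(predict("Врач внимательный, всё подробно объяснил."))  # hypothetical review
```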
## Dataset used for model training
**[Reviews of medical institutions](https://github.com/blanchefort/datasets/tree/master/medical_comments)**
> The dataset contains user reviews of medical institutions. It was collected in May 2019 from the site prodoctorov.ru.
| 1,276 |
baykenney/bert-large-gpt2detector-random | [
"Human",
"Machine"
] | Entry not found | 15 |
persiannlp/wikibert-base-parsinlu-entailment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- wikibert
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
labels = ["entails", "contradicts", "neutral"]
model_name_or_path = "persiannlp/wikibert-base-parsinlu-entailment"
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
def model_predict(text_a, text_b):
    features = tokenizer([(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
model_predict(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
model_predict(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
model_predict(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
| 1,639 |
uclanlp/plbart-java-clone-detection | null | Entry not found | 15 |
afbudiman/indobert-classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- f1
model-index:
- name: indobert-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9396825396825397
- name: F1
type: f1
value: 0.9393057427148881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-classification
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3707
- Accuracy: 0.9397
- F1: 0.9393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2458 | 1.0 | 688 | 0.2229 | 0.9325 | 0.9323 |
| 0.1258 | 2.0 | 1376 | 0.2332 | 0.9373 | 0.9369 |
| 0.059 | 3.0 | 2064 | 0.3389 | 0.9365 | 0.9365 |
| 0.0268 | 4.0 | 2752 | 0.3412 | 0.9421 | 0.9417 |
| 0.0097 | 5.0 | 3440 | 0.3707 | 0.9397 | 0.9393 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,999 |
rmihaylov/roberta-base-sentiment-bg | null | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# ROBERTA BASE (cased) trained on a private Bulgarian sentiment-analysis dataset
This is a Multilingual Roberta model.
This model is cased: it does make a difference between bulgarian and Bulgarian.
### How to use
Here is how to use this model in PyTorch:
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model_id = "rmihaylov/roberta-base-sentiment-bg"
>>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>>
>>> inputs = tokenizer.batch_encode_plus(['Това е умно.', 'Това е тъпо.'], return_tensors='pt')
>>> outputs = model(**inputs)
>>> torch.softmax(outputs, dim=1).tolist()
[[0.0004746630438603461, 0.9995253086090088],
[0.9986956715583801, 0.0013043134240433574]]
```
| 905 |
deepgai/tweet_eval-sentiment-finetuned | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: tweet_eval-sentiment-finetuned
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: tweeteval
type: tweeteval
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7099
- name: f1
type: f1
value: 0.7097
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_eval-sentiment-finetuned
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the Tweet_Eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6532
- Accuracy: 0.744
- F1: 0.7437
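A minimal usage sketch (an assumption, using the standard transformers pipeline; for tweet_eval sentiment, `LABEL_0`/`LABEL_1`/`LABEL_2` correspond to negative/neutral/positive):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="deepgai/tweet_eval-sentiment-finetuned")
# tweet_eval sentiment label order: LABEL_0 = negative, LABEL_1 = neutral, LABEL_2 = positive
print(classifier("This new update is fantastic!"))
```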
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7491 | 1.0 | 357 | 0.6089 | 0.7345 | 0.7314 |
| 0.5516 | 2.0 | 714 | 0.5958 | 0.751 | 0.7516 |
| 0.4618 | 3.0 | 1071 | 0.6131 | 0.748 | 0.7487 |
| 0.4066 | 4.0 | 1428 | 0.6532 | 0.744 | 0.7437 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,988 |
waboucay/camembert-large-finetuned-repnum_wl_3_classes | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 79.4 | 79.4 |
| test | 80.6 | 80.6 |
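A minimal usage sketch (an assumption, using the standard sequence-classification API on a French premise–hypothesis pair; the class order follows this repository's label list contradiction/entailment/neutral):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "waboucay/camembert-large-finetuned-repnum_wl_3_classes"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# hypothetical premise-hypothesis pair
inputs = tokenizer("Le projet de loi a été adopté.", "Le projet de loi a été rejeté.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # probabilities over [contradiction, entailment, neutral]
```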
| 367 |
Greg1901/BertSummaDev_summariser | null | Entry not found | 15 |
cardiffnlp/twitter-roberta-base-stance-atheism | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | 0 | |
ncduy/phobert-large-finetuned-vietnamese_students_feedback | [
"negative",
"neutral",
"positive"
] | ---
tags:
- generated_from_trainer
datasets:
- vietnamese_students_feedback
metrics:
- accuracy
model-index:
- name: phobert-large-finetuned-vietnamese_students_feedback
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: vietnamese_students_feedback
type: vietnamese_students_feedback
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9463044851547694
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-large-finetuned-vietnamese_students_feedback
This model is a fine-tuned version of [vinai/phobert-large](https://huggingface.co/vinai/phobert-large) on the vietnamese_students_feedback dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.9463
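A minimal usage sketch (an assumption, using the standard transformers pipeline; note that PhoBERT models expect word-segmented Vietnamese input):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="ncduy/phobert-large-finetuned-vietnamese_students_feedback")
# PhoBERT expects word-segmented input
print(classifier("Giảng_viên dạy rất hay và nhiệt_tình ."))  # hypothetical segmented feedback
```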
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 477 | 0.2088 | 0.9375 |
| 0.3231 | 2.0 | 954 | 0.2463 | 0.9444 |
| 0.1805 | 3.0 | 1431 | 0.2285 | 0.9463 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,882 |
danielhou13/longformer-finetuned_papers | null | Entry not found | 15 |
CogComp/bart-faithful-summary-detector | [
"FAITHFUL",
"HALLUCINATED"
] | ---
language:
- en
thumbnail: https://cogcomp.seas.upenn.edu/images/logo.png
tags:
- text-classification
- bart
- xsum
license: cc-by-sa-4.0
datasets:
- xsum
widget:
- text: "<s> Ban Ki-moon was elected for a second term in 2007. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
- text: "<s> Ban Ki-moon was elected for a second term in 2011. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
---
# bart-faithful-summary-detector
## Model description
A BART (base) model trained to classify whether a summary is *faithful* to the original article. See our [paper in NAACL'21](https://www.seas.upenn.edu/~sihaoc/static/pdf/CZSR21.pdf) for details.
## Usage
Concatenate a summary and a source document as input (note that the summary needs to be the **first** sentence).
Here's an example usage (with PyTorch)
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CogComp/bart-faithful-summary-detector")
model = AutoModelForSequenceClassification.from_pretrained("CogComp/bart-faithful-summary-detector")
article = "Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
bad_summary = "Ban Ki-moon was elected for a second term in 2007."
good_summary = "Ban Ki-moon was elected for a second term in 2011."
bad_pair = tokenizer(text=bad_summary, text_pair=article, return_tensors='pt')
good_pair = tokenizer(text=good_summary, text_pair=article, return_tensors='pt')
bad_score = model(**bad_pair)
good_score = model(**good_pair)
print(good_score[0][:, 1] > bad_score[0][:, 1]) # True, label mapping: "0" -> "Hallucinated" "1" -> "Faithful"
```
### BibTeX entry and citation info
```bibtex
@inproceedings{CZSR21,
author = {Sihao Chen and Fan Zhang and Kazoo Sone and Dan Roth},
title = {{Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection}},
booktitle = {NAACL},
year = {2021}
}
``` | 2,159 |
TransQuest/monotransquest-hter-en_any | [
"LABEL_0"
] | ---
language: en-multilingual
tags:
- Quality Estimation
- monotransquest
- HTER
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_any", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
``` | 5,411 |