modelId | label list | readme | readme_len |
|---|---|---|---|
boychaboy/MNLI_distilbert-base-cased | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
cardiffnlp/bertweet-base-stance-atheism | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | 0 | |
JacopoBandoni/BioBertRelationGenesDiseases | null | ---
license: afl-3.0
widget:
- text: "The case of a 72-year-old male with @DISEASE$ with poor insulin control (fasting hyperglycemia greater than 180 mg/dl) who had a long-standing polyuric syndrome is here presented. Hypernatremia and plasma osmolality elevated together with a low urinary osmolality led to the suspicion of diabetes insipidus, which was subsequently confirmed by the dehydration test and the administration of @GENE$ sc."
example_title: "Example 1"
- text: "Hypernatremia and plasma osmolality elevated together with a low urinary osmolality led to the suspicion of diabetes insipidus, which was subsequently confirmed by the dehydration test and the administration of @GENE$ sc. With 61% increase in the calculated urinary osmolarity one hour post desmopressin s.c., @DISEASE$ was diagnosed."
example_title: "Example 2"
---
The following is a fine-tuning of the BioBERT model on the GAD dataset.
The model works by masking the gene string with "@GENE$" and the disease string with "@DISEASE$".
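For illustration, here is a minimal sketch of querying the model with the `transformers` pipeline (the pipeline call, example sentence, and printed label are assumptions, not part of the original card):
```python
from transformers import pipeline

# Sketch: load this checkpoint as a standard text-classification pipeline.
classifier = pipeline("text-classification", model="JacopoBandoni/BioBertRelationGenesDiseases")

masked = ("The administration of @GENE$ confirmed the suspicion of @DISEASE$ "
          "raised by the dehydration test.")
print(classifier(masked))  # e.g. [{'label': 'LABEL1', 'score': ...}] -- exact label names may differ
```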
The output is a text classification that can either be:
- "LABEL0" if there is no relation
- "LABEL1" if there is a relation. | 1,147 |
Sreevishnu/funnel-transformer-small-imdb | [
"neg",
"pos"
] | ---
license: apache-2.0
language: en
widget:
- text: "In the garden of wonderment that is the body of work by the animation master Hayao Miyazaki, his 2001 gem 'Spirited Away' is at once one of his most accessible films to a Western audience and the one most distinctly rooted in Japanese culture and lore. The tale of Chihiro, a 10 year old girl who resents being moved away from all her friends, only to find herself working in a bathhouse for the gods, doesn't just use its home country's fraught relationship with deities as a backdrop. Never remotely didactic, the film is ultimately a self-fulfilment drama that touches on religious, ethical, ecological and psychological issues.
It's also a fine children's film, the kind that elicits a deepening bond across repeat viewings and the passage of time, mostly because Miyazaki refuses to talk down to younger viewers. That's been a constant in all of his filmography, but it's particularly conspicuous here because the stakes for its young protagonist are bigger than in most of his previous features aimed at younger viewers. It involves conquering fears and finding oneself in situations where safety is not a given.
There are so many moving parts in Spirited Away, from both a thematic and technical point of view, that pinpointing what makes Spirited Away stand out from an already outstanding body of work becomes as challenging as a meeting with Yubaba. But I think it comes down to an ability to deal with heady, complex subject matter from a young girl's perspective without diluting or lessening its resonance. Miyazaki has made a loopy, demanding work of art that asks your inner child to come out and play. There are few high-wire acts in all of movie-dom as satisfying as that."
datasets:
- imdb
tags:
- sentiment-analysis
---
# Funnel Transformer small (B4-4-4 with decoder) fine-tuned on IMDB for Sentiment Analysis
These are the model weights for the Funnel Transformer small model fine-tuned on the IMDB dataset for performing Sentiment Analysis with `max_position_embeddings=1024`.
The original English model weights are from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small) and it uses a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English.
## Fine-tuning Results
| | Accuracy | Precision | Recall | F1 |
|-------------------------------|----------|-----------|----------|----------|
| funnel-transformer-small-imdb | 0.956530 | 0.952286 | 0.961075 | 0.956661 |
## Model description (from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small))
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
# How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained(
"Sreevishnu/funnel-transformer-small-imdb",
use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(
"Sreevishnu/funnel-transformer-small-imdb",
num_labels=2,
max_position_embeddings=1024)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
# Example App
https://lazy-film-reviews-7gif2bz4sa-ew.a.run.app/
Project repo: https://github.com/akshaydevml/lazy-film-reviews | 4,520 |
dinalzein/xlm-roberta-base-finetuned-language-identification | [
"ar",
"bg",
"de",
"el",
"en",
"es",
"fr",
"hi",
"it",
"ja",
"nl",
"pl",
"pt",
"ru",
"sw",
"th",
"tr",
"ur",
"vi",
"zh"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-language-identification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-language-detection-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification dataset](https://huggingface.co/datasets/papluca/language-identification).
It achieves the following results on the evaluation set:
- Loss: 0.0436
- Accuracy: 0.9959
## Model description
The model used in this task is XLM-RoBERTa, a transformer model with a classification head on top.
## Intended uses & limitations
It identifies the language a document is written in and supports 20 different languages:
Arabic (ar), Bulgarian (bg), German (de), Modern greek (el), English (en), Spanish (es), French (fr), Hindi (hi), Italian (it), Japanese (ja), Dutch (nl), Polish (pl), Portuguese (pt), Russian (ru), Swahili (sw), Thai (th), Turkish (tr), Urdu (ur), Vietnamese (vi), Chinese (zh)
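A minimal inference sketch (the pipeline usage below is an assumption; it is not part of the auto-generated card):
```python
from transformers import pipeline

# Sketch: language identification as single-label text classification.
classifier = pipeline(
    "text-classification",
    model="dinalzein/xlm-roberta-base-finetuned-language-identification",
)
print(classifier("Bonjour, comment allez-vous ?"))  # expected label: "fr"
```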
## Training and evaluation data
The model is fine-tuned on the [Language Identification dataset](https://huggingface.co/datasets/papluca/language-identification), a corpus consisting of text from 20 different languages. The dataset is split into 7000 sentences for training, 1000 for validation, and 1000 for testing. The accuracy on the test set is 99.5%.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0493 | 1.0 | 35000 | 0.0407 | 0.9955 |
| 0.018 | 2.0 | 70000 | 0.0436 | 0.9959 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 2,234 |
cwkeam/m-ctc-t-large-sequence-lid | [
"ab",
"ar",
"as",
"br",
"ca",
"cnh",
"cs",
"cv",
"cy",
"de",
"dv",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy-NL",
"ga-IE",
"hi",
"hsb",
"hu",
"ia",
"id",
"it",
"ja",
"ka",
"kab",
"ky",
"lg",
"lt",
"lv",
"mn",
"mt",
"nl",
"or",
"pa-IN",
"pl",
"pt",
"rm-sursilv",
"rm-vallader",
"ro",
"ru",
"rw",
"sah",
"sl",
"sv-SE",
"ta",
"th",
"tr",
"tt",
"uk",
"vi",
"vot",
"zh-CN",
"zh-HK",
"zh-TW"
] | ---
language: en
datasets:
- librispeech_asr
- common_voice
tags:
- speech
license: apache-2.0
---
# M-CTC-T
Massively multilingual speech recognizer from Meta AI. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16 kHz audio signal.

The original Flashlight code, model checkpoints, and Colab notebook can be found at https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl .
## Citation
[Paper](https://arxiv.org/abs/2111.00161)
Authors: Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert
```
@article{lugosch2021pseudo,
title={Pseudo-Labeling for Massively Multilingual Speech Recognition},
author={Lugosch, Loren and Likhomanenko, Tatiana and Synnaeve, Gabriel and Collobert, Ronan},
journal={ICASSP},
year={2022}
}
```
Additional thanks to [Chan Woo Kim](https://huggingface.co/cwkeam) and [Patrick von Platen](https://huggingface.co/patrickvonplaten) for porting the model from Flashlight to PyTorch.
# Training method
 TO-DO: replace with the training diagram from paper
For more information on how the model was trained, please take a look at the [official paper](https://arxiv.org/abs/2111.00161).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import MCTCTForCTC, MCTCTProcessor
model = MCTCTForCTC.from_pretrained("speechbrain/mctct-large")
processor = MCTCTProcessor.from_pretrained("speechbrain/mctct-large")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_features = processor(ds[0]["audio"]["array"], return_tensors="pt").input_features
# retrieve logits
logits = model(input_features).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
Results for Common Voice, averaged over all languages:
*Character error rate (CER)*:
| Valid | Test |
|-------|------|
| 21.4 | 23.3 |
| 2,741 |
p-christ/QandAClassifier | [
"ACCEPTED",
"REJECTED"
] | Entry not found | 15 |
IMSyPP/hate_speech_nl | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
language:
- nl
license: mit
---
# Hate Speech Classifier for Social Media Content in Dutch
A monolingual model for hate speech classification of social media content in Dutch. The model was trained on 20000 social media posts (YouTube, Twitter, Facebook) and tested on an independent test set of 2000 posts. It is based on the pre-trained language model [BERTje](https://huggingface.co/wietsedv/bert-base-dutch-cased).
## Tokenizer
During training the text was preprocessed using the BERTje tokenizer. We suggest the same tokenizer is used for inference.
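A minimal inference sketch (the pipeline usage and example sentence are assumptions; the checkpoint ships the BERTje tokenizer mentioned above):
```python
from transformers import pipeline

# Sketch: the labels LABEL_0..LABEL_3 map to the classes listed under "Model output" below.
classifier = pipeline("text-classification", model="IMSyPP/hate_speech_nl")
print(classifier("Wat een mooie dag vandaag!"))  # e.g. [{'label': 'LABEL_0', 'score': ...}]
```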
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | 716 |
NbAiLab/nb-bert-base-samisk | null | ---
license: apache-2.0
---
| 31 |
TehranNLP-org/bert-base-uncased-cls-hatexplain | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
classla/sloberta-frenk-hate | null | ---
language: "sl"
tags:
- text-classification
- hate-speech
widget:
- text: "Silva, ti si grda in neprijazna"
---
Text classification model based on `EMBEDDIA/sloberta` and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433) comprising LGBT and migrant hate speech. Only the Slovenian subset of the data was used for fine-tuning, and the dataset was relabeled for binary classification (offensive or acceptable).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
```python
model_args = {
"num_train_epochs": 14,
"learning_rate": 1e-5,
"train_batch_size": 21,
}
```
## Performance
The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and post festum analyzed.
| model | average accuracy | average macro F1|
|---|---|---|
|sloberta-frenk-hate|0.7785|0.7764|
|EMBEDDIA/crosloengual-bert |0.7616|0.7585|
|xlm-roberta-base |0.686|0.6827|
|fasttext|0.709 |0.701 |
From recorded accuracies and macro F1 scores p-values were also calculated:
Comparison with `crosloengual-bert`:
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.00781|0.00781|
|Mann-Whitney U test|0.00163|0.00108|
|Student t-test |0.000101|3.95e-05|
Comparison with `xlm-roberta-base`:
| test | accuracy p-value | macro F1 p-value|
| --- | --- | --- |
|Wilcoxon|0.00781|0.00781|
|Mann-Whitney U test|0.00108|0.00108|
|Student t-test |9.46e-11|6.94e-11|
## Use examples
```python
from simpletransformers.classification import ClassificationModel
model_args = {
"num_train_epochs": 6,
"learning_rate": 3e-6,
"train_batch_size": 69}
model = ClassificationModel(
"camembert", "5roop/sloberta-frenk-hate", use_cuda=True,
args=model_args
)
predictions, logit_output = model.predict(["Silva, ti si grda in neprijazna", "Naša hiša ima dimnik"])
predictions
### Output:
### array([1, 0])
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
and the dataset used for fine-tuning:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
``` | 3,464 |
lewtun/minilm-finetuned-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: minilm-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9117582218338629
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# minilm-finetuned-emotion
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3891
- F1: 0.9118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.3957 | 1.0 | 250 | 1.0134 | 0.6088 |
| 0.8715 | 2.0 | 500 | 0.6892 | 0.8493 |
| 0.6085 | 3.0 | 750 | 0.4943 | 0.8920 |
| 0.4626 | 4.0 | 1000 | 0.4096 | 0.9078 |
| 0.3961 | 5.0 | 1250 | 0.3891 | 0.9118 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.6.0
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,862 |
inovex/multi2convai-corona-de-bert | [
"corona.traffic",
"corona.supplies",
"corona.quarantine",
"corona.masks",
"corona.illness",
"corona.package",
"corona.vaccine",
"corona.rumors",
"corona.risk",
"corona.course",
"corona.symptoms",
"corona.patients",
"corona.deathRate",
"corona.infect",
"corona.protect",
"corona.definition",
"neo.feeling",
"neo.hello",
"neo.introduce",
"neo.help",
"corona.ibuprofen",
"neo.sucks",
"neo.joke",
"neo.thanks",
"neo.wyd",
"neo.yes",
"neo.no",
"neo.report",
"neo.sorry",
"neo.age",
"neo.home",
"corona.warn-app",
"corona.test",
"corona.contact",
"corona.event",
"corona.fahrradpruefung",
"corona.leisure",
"corona.notbetreuung",
"corona.travel",
"regio.taxes.help",
"undefined"
] | ---
tags:
- text-classification
- pytorch
- transformers
widget:
- text: "Muss ich eine Maske tragen?"
license: mit
language: de
---
# Multi2ConvAI-Corona: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-de-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai | 1,002 |
navteca/nli-deberta-v3-large | [
"contradiction",
"entailment",
"neutral"
] | ---
datasets:
- multi_nli
- snli
language: en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-large
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 92.20
- Accuracy on MNLI mismatched set: 90.49
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-large')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-large')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-large')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-large')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` | 2,781 |
tdrenis/finetuned-bot-detector | null | Student project that fine-tuned the roberta-base-openai-detector model on the Twibot-20 dataset. | 96 |
ChrisLiewJY/BERTweet-Hedge | null | ---
license: mit
language:
- en
tags:
- uncertainty-detection
- social-media
- text-classification
widget:
- text: "It seems like Bitcoin prices are heading into bearish territory."
example_title: "Hedge Detection (Positive - Label 1)"
- text: "Bitcoin prices have fallen by 42% in the last 30 days."
example_title: "Hedge Detection (Negative - Label 0)"
---
### Overview
Fine-tuned from VinAI's BERTweet base model on the Wiki Weasel 2.0 Corpus from the [Szeged Uncertainty Corpus](https://rgai.inf.u-szeged.hu/node/160) for hedge (linguistic uncertainty) detection in social media texts. The model was trained and optimised using Ray Tune's implementation of DeepMind's Population Based Training, with the arithmetic mean of accuracy & F1 as its evaluation metric.
### Labels
* LABEL_1 = Positive (Hedge is detected within text)
* LABEL_0 = Negative (No Hedges detected within text)
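A minimal usage sketch (the pipeline call and the example output are assumptions, not part of the original card):
```python
from transformers import pipeline

# Sketch: hedge detection as binary text classification.
detector = pipeline("text-classification", model="ChrisLiewJY/BERTweet-Hedge")
print(detector("It seems like Bitcoin prices are heading into bearish territory."))
# e.g. [{'label': 'LABEL_1', 'score': ...}]  (LABEL_1 = hedge detected)
```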
### <a name="models2"></a> Model Performance
Model | Accuracy | F1-Score | Accuracy & F1-Score
---|---|---|---
`BERTweet-Hedge` | 0.9680 | 0.8765 | 0.9222
| 1,041 |
SetFit/distilbert-base-uncased__enron_spam__all-train | [
"ham",
"spam"
] | Entry not found | 15 |
Tatyana/rubert_conversational_cased_sentiment | null | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- Tatyana/ru_sentiment_dataset
---
# Keras model with ruBERT conversational embedder for Sentiment Analysis
Russian texts sentiment classification.
Model trained on [Tatyana/ru_sentiment_dataset](https://huggingface.co/datasets/Tatyana/ru_sentiment_dataset)
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
!pip install tensorflow-gpu
!pip install deeppavlov
!python -m deeppavlov install squad_bert
!pip install fasttext
!pip install transformers
!python -m deeppavlov install bert_sentence_embedder
from deeppavlov import build_model
model = build_model("Tatyana/rubert_conversational_cased_sentiment/custom_config.json")
model(["Сегодня хорошая погода", "Я счастлив проводить с тобою время", "Мне нравится эта музыкальная композиция"])
```
| 860 |
boychaboy/SNLI_roberta-large | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
fergusq/finbert-finnsentiment | [
"NEGATIVE",
"NEUTRAL",
"POSITIVE"
] | ---
language: fi
---
# FinBERT fine-tuned with the FinnSentiment dataset
This is a FinBERT model fine-tuned with the [FinnSentiment dataset](https://arxiv.org/pdf/2012.02613.pdf).
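A minimal usage sketch (the pipeline call and example sentence are assumptions; they are not part of the original card):
```python
from transformers import pipeline

# Sketch: Finnish sentiment classification with labels NEGATIVE / NEUTRAL / POSITIVE.
classifier = pipeline("text-classification", model="fergusq/finbert-finnsentiment")
print(classifier("Tämä elokuva oli todella hyvä!"))  # expected label: POSITIVE
```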
| 182 |
wanyu/IteraTeR-ROBERTA-Intention-Classifier | [
"clarity",
"coherence",
"fluency",
"meaning-changed",
"style"
] | ---
datasets:
- IteraTeR_full_sent
---
# IteraTeR RoBERTa model
This model was obtained by fine-tuning [roberta-large](https://huggingface.co/roberta-large) on [IteraTeR-human-sent](https://huggingface.co/datasets/wanyu/IteraTeR_human_sent) dataset.
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802) <br>
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
## Edit Intention Prediction Task
Given a pair of original sentence and revised sentence, our model can predict the edit intention for this revision pair.<br>
More specifically, the model will predict the probability of the following edit intentions:
<table>
<tr>
<th>Edit Intention</th>
<th>Definition</th>
<th>Example</th>
</tr>
<tr>
<td>clarity</td>
<td>Make the text more formal, concise, readable and understandable.</td>
<td>
Original: It's like a house which anyone can enter in it. <br>
Revised: It's like a house which anyone can enter.
</td>
</tr>
<tr>
<td>fluency</td>
<td>Fix grammatical errors in the text.</td>
<td>
Original: In the same year he became the Fellow of the Royal Society. <br>
Revised: In the same year, he became the Fellow of the Royal Society.
</td>
</tr>
<tr>
<td>coherence</td>
<td>Make the text more cohesive, logically linked and consistent as a whole.</td>
<td>
Original: Achievements and awards Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy. <br>
Revised: Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy.
</td>
</tr>
<tr>
<td>style</td>
<td>Convey the writer’s writing preferences, including emotions, tone, voice, etc..</td>
<td>
Original: She was last seen on 2005-10-22. <br>
Revised: She was last seen on October 22, 2005.
</td>
</tr>
<tr>
<td>meaning-changed</td>
<td>Update or add new information to the text.</td>
<td>
Original: This method improves the model accuracy from 64% to 78%. <br>
Revised: This method improves the model accuracy from 64% to 83%.
</td>
</tr>
</table>
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("wanyu/IteraTeR-ROBERTA-Intention-Classifier")
model = AutoModelForSequenceClassification.from_pretrained("wanyu/IteraTeR-ROBERTA-Intention-Classifier")
id2label = {0: "clarity", 1: "fluency", 2: "coherence", 3: "style", 4: "meaning-changed"}
before_text = 'I likes coffee.'
after_text = 'I like coffee.'
model_input = tokenizer(before_text, after_text, return_tensors='pt')
model_output = model(**model_input)
softmax_scores = torch.softmax(model_output.logits, dim=-1)
pred_id = torch.argmax(softmax_scores)
pred_label = id2label[pred_id.item()]
``` | 2,927 |
UT/BMW | null | Entry not found | 15 |
jonas/sdg_classifier_osdg | [
"1",
"10",
"11",
"12",
"13",
"14",
"15",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9"
] | ---
language: en
widget:
- text: "Ending all forms of discrimination against women and girls is not only a basic human right, but it also crucial to accelerating sustainable development. It has been proven time and again, that empowering women and girls has a multiplier effect, and helps drive up economic growth and development across the board.
Since 2000, UNDP, together with our UN partners and the rest of the global community, has made gender equality central to our work. We have seen remarkable progress since then. More girls are now in school compared to 15 years ago, and most regions have reached gender parity in primary education. Women now make up to 41 percent of paid workers outside of agriculture, compared to 35 percent in 1990."
datasets:
- jonas/osdg_sdg_data_processed
co2_eq_emissions: 0.0653263174784986
---
# About
Machine Learning model for classifying text according to the first 15 of the 17 Sustainable Development Goals from the United Nations. Note that the model is trained on quite short paragraphs (around 100 words) and performs best with inputs of similar size.
Data comes from the amazing https://osdg.ai/ community!
# Model Training Specifics
- Problem type: Multi-class Classification
- Model ID: 900229515
- CO2 Emissions (in grams): 0.0653263174784986
## Validation Metrics
- Loss: 0.3644874095916748
- Accuracy: 0.8972544579677328
- Macro F1: 0.8500873710954522
- Micro F1: 0.8972544579677328
- Weighted F1: 0.8937529692986061
- Macro Precision: 0.8694369727467804
- Micro Precision: 0.8972544579677328
- Weighted Precision: 0.8946984684977016
- Macro Recall: 0.8405065997404059
- Micro Recall: 0.8972544579677328
- Weighted Recall: 0.8972544579677328
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/jonas/autotrain-osdg-sdg-classifier-900229515
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jonas/sdg_classifier_osdg", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jonas/sdg_classifier_osdg", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 2,365 |
tezign/Erlangshen-Sentiment-FineTune | null | ---
language: zh
tags:
- sentiment-analysis
- pytorch
widget:
- text: "房间非常非常小,内窗,特别不透气,因为夜里走廊灯光是亮的,内窗对着走廊,窗帘又不能完全拉死,怎么都会有一道光射进来。"
- text: "尽快有洗衣房就好了。"
- text: "很好,干净整洁,交通方便。"
- text: "干净整洁很好"
---
# Note
BERT-based sentiment analysis, fine-tuned from https://huggingface.co/IDEA-CCNL/Erlangshen-Roberta-330M-Sentiment .
The model was trained on a **Chinese hotel review dataset**.
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
MODEL = "tezign/Erlangshen-Sentiment-FineTune"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, trust_remote_code=True)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
result = classifier("很好,干净整洁,交通方便。")
print(result)
"""
print result
>> [{'label': 'Positive', 'score': 0.989660382270813}]
"""
```
# Evaluate
We compared the performance of **our fine-tuned model** and the **original Erlangshen model** on the **hotel review test dataset** (5429 negative reviews and 1251 positive reviews).
The results show that our model substantially improves the precision and recall for positive reviews:
```text
Our finetune model:
precision recall f1-score support
Negative 0.99 0.98 0.98 5429
Positive 0.92 0.95 0.93 1251
accuracy 0.97 6680
macro avg 0.95 0.96 0.96 6680
weighted avg 0.97 0.97 0.97 6680
======================================================
Original Erlangshen model:
precision recall f1-score support
Negative 0.81 1.00 0.90 5429
Positive 0.00 0.00 0.00 1251
accuracy 0.81 6680
macro avg 0.41 0.50 0.45 6680
weighted avg 0.66 0.81 0.73 6680
``` | 1,988 |
ReynaQuita/twitter_disaster_bert_large | null | Entry not found | 15 |
abhishek/autonlp-japanese-sentiment-59362 | [
"negative",
"positive"
] | ---
tags: autonlp
language: ja
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-japanese-sentiment
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 59362
## Validation Metrics
- Loss: 0.13092292845249176
- Accuracy: 0.9527127414314258
- Precision: 0.9634070704982427
- Recall: 0.9842171959602166
- AUC: 0.9667289746092403
- F1: 0.9737009564152002
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-japanese-sentiment-59362
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-japanese-sentiment-59362", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-japanese-sentiment-59362", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,096 |
finiteautomata/bert-contextualized-hate-speech-es | [
"Hateful",
"Not hateful"
] | Entry not found | 15 |
google/tapas-large-finetuned-tabfact | null | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS large model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_large`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly train this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
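As a rough sketch of what such a call can look like (the table, sentence, and label mapping below are assumptions for illustration, not taken from this card):
```python
import pandas as pd
from transformers import TapasTokenizer, TapasForSequenceClassification

model_name = "google/tapas-large-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# TAPAS expects all table cells as strings.
table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo DiCaprio"], "Age": ["59", "48"]})
sentence = "Brad Pitt is 59 years old."

inputs = tokenizer(table=table, queries=[sentence], padding="max_length", return_tensors="pt")
logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()  # assumed mapping: 1 = supported, 0 = refuted
```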
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
``` | 4,870 |
nateraw/codecarbon-text-classification | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: codecarbon-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codecarbon-text-classification
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,067 |
ykacer/bert-base-cased-imdb-sequence-classification | null |
---
language:
- en
thumbnail: https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png
tags:
- sequence
- classification
license: apache-2.0
datasets:
- imdb
metrics:
- accuracy
---
| 213 |
rasta/distilbert-base-uncased-finetuned-fashion | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-fashion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-fashion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a manually created dataset in order to detect fashion (label_0) from non-fashion (label_1) items.
It achieves the following results on the evaluation set:
- Loss: 0.0809
- Accuracy: 0.98
- F1: 0.9801
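A minimal inference sketch (the pipeline usage and example text are assumptions; they are not part of the auto-generated card):
```python
from transformers import pipeline

# Sketch: label_0 = fashion, label_1 = non-fashion, as described above.
classifier = pipeline("text-classification", model="rasta/distilbert-base-uncased-finetuned-fashion")
print(classifier("A slim-fit denim jacket with embroidered floral details."))
```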
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4017 | 1.0 | 47 | 0.1220 | 0.966 | 0.9662 |
| 0.115 | 2.0 | 94 | 0.0809 | 0.98 | 0.9801 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,394 |
tinkoff-ai/response-quality-classifier-base | [
"relevance",
"specificity"
] | ---
license: mit
widget:
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]супер, вот только проснулся, у тебя как?"
example_title: "Dialog example 1"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм"
example_title: "Dialog example 2"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?"
example_title: "Dialog example 3"
language:
- ru
tags:
- conversational
---
This classification model is based on [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence).
The model should be used to produce relevance and specificity of the last message in the context of a dialogue.
The labels explanation:
- `relevance`: is the last message in the dialogue relevant in the context of the full dialogue.
- `specificity`: is the last message in the dialogue interesting and promotes the continuation of the dialogue.
It is pretrained on a large corpus of dialog data in an unsupervised manner: the model is trained to predict whether the last response occurred in a real dialog or was pulled from some other dialog at random.
Then it was finetuned on manually labelled examples (dataset will be posted soon).
The model was trained with three messages in the context and one response. Each message was tokenized separately with ``` max_length = 32 ```.
The performance of the model on validation split (dataset will be posted soon) (with the best thresholds for validation samples):
| | threshold | f0.5 | ROC AUC |
|:------------|------------:|-------:|----------:|
| relevance | 0.49 | 0.84 | 0.79 |
| specificity | 0.53 | 0.83 | 0.83 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-quality-classifier-base')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-quality-classifier-base')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.sigmoid(logits)[0].cpu().detach().numpy()
relevance, specificity = probas
```
The [app](https://huggingface.co/spaces/tinkoff-ai/response-quality-classifiers) where you can easily interact with this model.
The work was done during internship at Tinkoff by [egoriyaa](https://github.com/egoriyaa), mentored by [solemn-leader](https://huggingface.co/solemn-leader). | 2,593 |
PrimeQA/tydiqa-boolean-answer-classifier | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
---
## Model description
An answer classification model for boolean questions based on XLM-RoBERTa.
The answer classifier takes as input a boolean question and a passage, and returns a label (yes, no-answer, no).
The model was initialized with [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tuned on the boolean questions from [TyDiQA](https://huggingface.co/datasets/tydiqa), as well as [BoolQ-X](https://arxiv.org/abs/2112.07772#).
## Intended uses & limitations
You can use the raw model for question classification. Biases associated with the pre-existing language model, xlm-roberta-large, may be present in our fine-tuned model, tydiqa-boolean-answer-classifier.
## Usage
You can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework for supporting boolean questions in reading comprehension: [examples](https://github.com/primeqa/primeqa/tree/main/examples/boolqa).
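For a rough standalone sketch with plain `transformers` (this usage, the example texts, and the yes / no-answer / no label order are assumptions, not taken from this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "PrimeQA/tydiqa-boolean-answer-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

question = "Is the Eiffel Tower taller than 300 metres?"
passage = "The Eiffel Tower is 330 metres tall, about the same height as an 81-storey building."
inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # scores for LABEL_0 / LABEL_1 / LABEL_2
```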
### BibTeX entry and citation info
```bibtex
@article{Rosenthal2021DoAT,
title={Do Answers to Boolean Questions Need Explanations? Yes},
author={Sara Rosenthal and Mihaela A. Bornea and Avirup Sil and Radu Florian and Scott McCarley},
journal={ArXiv},
year={2021},
volume={abs/2112.07772}
}
```
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.08441,
author = {McCarley, Scott and
Bornea, Mihaela and
Rosenthal, Sara and
Ferritto, Anthony and
Sultan, Md Arafat and
Sil, Avirup and
Florian, Radu},
title = {GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions},
journal = {CoRR},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2206.08441},
}
``` | 1,770 |
Tomas23/twitter-roberta-base-mar2022-finetuned-sentiment | [
"negative",
"neutral",
"positive"
] | Entry not found | 15 |
okho0653/Bio_ClinicalBERT-zero-shot-tokenizer-truncation-sentiment-model | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Bio_ClinicalBERT-zero-shot-tokenizer-truncation-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT-zero-shot-tokenizer-truncation-sentiment-model
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,118 |
adamnik/electra-event-detection | null | ---
license: mit
---
| 21 |
Cameron/BERT-mdgender-convai-binary | null | Entry not found | 15 |
LilaBoualili/bert-sim-pair | null | At its core it uses an BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy that highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking.
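A minimal loading sketch (the '#'-marked example text is illustrative only; the exact Sim-Pair marking procedure lives in the repository above):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LilaBoualili/bert-sim-pair")
model = AutoModelForSequenceClassification.from_pretrained("LilaBoualili/bert-sim-pair")

# Sketch: exact query-passage term matches are wrapped with '#' marker tokens beforehand.
inputs = tokenizer("current # weather # in # paris #",
                   "The # weather # in # Paris # is mild in spring.",
                   return_tensors="pt", truncation=True)
relevance_logits = model(**inputs).logits
```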
| 441 |
SetFit/distilbert-base-uncased__sst5__all-train | [
"negative",
"neutral",
"positive",
"very negative",
"very positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst5__all-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst5__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3757
- Accuracy: 0.5045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2492 | 1.0 | 534 | 1.1163 | 0.4991 |
| 0.9937 | 2.0 | 1068 | 1.1232 | 0.5122 |
| 0.7867 | 3.0 | 1602 | 1.2097 | 0.5045 |
| 0.595 | 4.0 | 2136 | 1.3757 | 0.5045 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,613 |
Narsil/bart-large-mnli-opti | [
"contradiction",
"entailment",
"neutral"
] | ---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
pipeline_tag: zero-shot-classification
datasets:
- multi_nli
---
# bart-large-mnli
This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/datasets/multi_nli) dataset.
Additional information about this model:
- The [bart-large](https://huggingface.co/facebook/bart-large) model page
- [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
](https://arxiv.org/abs/1910.13461)
- [BART fairseq implementation](https://github.com/pytorch/fairseq/tree/master/fairseq/models/bart)
## NLI-based Zero Shot Text Classification
[Yin et al.](https://arxiv.org/abs/1909.00161) proposed a method for using pre-trained NLI models as ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and constructing a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct a hypothesis of `This text is about politics.`. The probabilities for entailment and contradiction are then converted to label probabilities.
This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and Roberta. See [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html) for a more expansive introduction to this and other zero shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code.
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="facebook/bart-large-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
#{'labels': ['travel', 'dancing', 'cooking'],
# 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289],
# 'sequence': 'one day I will see the world'}
```
If more than one candidate label can be correct, pass `multi_class=True` to calculate each class independently:
```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_class=True)
#{'labels': ['travel', 'exploration', 'dancing', 'cooking'],
# 'scores': [0.9945111274719238,
# 0.9383890628814697,
# 0.0057061901316046715,
# 0.0018193122232332826],
# 'sequence': 'one day I will see the world'}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
from transformers import AutoModelForSequenceClassification, AutoTokenizer
nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli')
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')
premise = sequence
hypothesis = f'This example is {label}.'
# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
truncation_strategy='only_first')
logits = nli_model(x.to(device))[0]
# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```
| 3,793 |
anahitapld/dbd_bert_da_simple | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",
"LABEL_3",
"LABEL_30",
"LABEL_31",
"LABEL_32",
"LABEL_33",
"LABEL_34",
"LABEL_35",
"LABEL_36",
"LABEL_37",
"LABEL_38",
"LABEL_39",
"LABEL_4",
"LABEL_40",
"LABEL_41",
"LABEL_42",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
---
| 28 |
StanfordAIMI/covid-radbert | [
"no COVID-19",
"uncertain COVID-19",
"COVID-19"
] | ---
widget:
- text: "procedure: single ap view of the chest comparison: none findings: no surgical hardware nor tubes. lungs, pleura: low lung volumes, bilateral airspace opacities. no pneumothorax or pleural effusion. cardiovascular and mediastinum: the cardiomediastinal silhouette seems stable. impression: 1. patchy bilateral airspace opacities, stable, but concerning for multifocal pneumonia. 2. absence of other suspicions, the rest of the lungs seems fine."
- text: "procedure: single ap view of the chest comparison: none findings: No surgical hardware nor tubes. lungs, pleura: low lung volumes, bilateral airspace opacities. no pneumothorax or pleural effusion. cardiovascular and mediastinum: the cardiomediastinal silhouette seems stable. impression: 1. patchy bilateral airspace opacities, stable. 2. some areas are suggestive that pneumonia can not be excluded. 3. recommended to follow-up shortly and check if there are additional symptoms"
tags:
- text-classification
- pytorch
- transformers
- uncased
- radiology
- biomedical
- covid-19
- covid19
language:
- en
license: mit
---
COVID-RadBERT was trained to detect the presence or absence of COVID-19 within radiology reports, along with an "uncertain" label when further medical tests are required. Manuscript in proceedings. | 1,299 |
airKlizz/xlm-roberta-base-germeval21-toxic-with-data-augmentation | null | Entry not found | 15 |
aubmindlab/aragpt2-mega-detector-long | [
"human-written",
"machine-generated"
] | ---
language: ar
widget:
- text: "وإذا كان هناك من لا يزال يعتقد أن لبنان هو سويسرا الشرق ، فهو مخطئ إلى حد بعيد . فلبنان ليس سويسرا ، ولا يمكن أن يكون كذلك . لقد عاش اللبنانيون في هذا البلد منذ ما يزيد عن ألف وخمسمئة عام ، أي منذ تأسيس الإمارة الشهابية التي أسسها الأمير فخر الدين المعني الثاني ( 1697 - 1742 )"
---
# AraGPT2 Detector
Machine generated detector model from the [AraGPT2: Pre-Trained Transformer for Arabic Language Generation paper](https://arxiv.org/abs/2012.15520)
This model is trained on the long text passages, and achieves a 99.4% F1-Score.
# How to use it:
```python
from transformers import pipeline
from arabert.preprocess import ArabertPreprocessor
processor = ArabertPreprocessor(model="aubmindlab/araelectra-base-discriminator")
pipe = pipeline("sentiment-analysis", model = "aubmindlab/aragpt2-mega-detector-long")
text = " "
text_prep = processor.preprocess(text)
result = pipe(text_prep)
# [{'label': 'machine-generated', 'score': 0.9977743625640869}]
```
# If you used this model please cite us as :
```
@misc{antoun2020aragpt2,
title={AraGPT2: Pre-Trained Transformer for Arabic Language Generation},
author={Wissam Antoun and Fady Baly and Hazem Hajj},
year={2020},
eprint={2012.15520},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com> | 1,749 |
cardiffnlp/bertweet-base-stance-climate | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | 0 | |
mrm8488/flaubert-small-finetuned-movie-review-sentiment-analysis | null | Entry not found | 15 |
unicamp-dl/mMiniLM-L6-v2-mmarco-v1 | [
"LABEL_0"
] | ---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mMiniLM-L6-v2 Reranker finetuned on mMARCO
## Introduction
mMiniLM-L6-v2-mmarco-v1 is a multilingual miniLM-based model finetuned on a multilingual version of the MS MARCO passage dataset. This dataset, named mMARCO, is formed by passages in 9 different languages, translated from the English MS MARCO passage collection.
In the version v1, the datasets were translated using [Helsinki](https://huggingface.co/Helsinki-NLP) NMT model. Further information about the dataset or the translation method can be found on our [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import AutoTokenizer, AutoModel
model_name = 'unicamp-dl/mMiniLM-L6-v2-mmarco-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
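As an illustration of reranking with this checkpoint (a sketch of ours, not from the original card; it assumes the model exposes a single-logit sequence-classification head, consistent with the `LABEL_0` output listed for this entry):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'unicamp-dl/mMiniLM-L6-v2-mmarco-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
reranker = AutoModelForSequenceClassification.from_pretrained(model_name)

query = "qual é a capital do Brasil?"                # illustrative query
passage = "Brasília é a capital federal do Brasil."  # illustrative passage
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = reranker(**inputs).logits.squeeze().item()  # higher means more relevant
print(score)
```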
# Citation
If you use mMiniLM-L6-v2-mmarco-v1, please cite:
```bibtex
@misc{bonifacio2021mmarco,
  title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
  author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
  year={2021},
  eprint={2108.13897},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
| 1,545 |
HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg_ACE-arg | [
"contradiction",
"entailment",
"neutral"
] | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information please, take a look to the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
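For instance, the checkpoint can be queried through the stock zero-shot pipeline; the sentence and candidate labels below are illustrative assumptions of ours, not examples from the original card:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg_ACE-arg")

# [[ ]] marks the event trigger span, following the preprocessing note above
sentence = "The company [[acquired]] its smaller rival for $2 billion."
candidate_labels = ["buyer", "thing bought", "price paid"]
print(classifier(sentence, candidate_labels=candidate_labels))
```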
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` | 3,612 |
aomar85/fine-tuned-arabert-random-negative | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: fine-tuned-arabert-random-negative
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-arabert-random-negative
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0080
- Accuracy: 0.9989
- Precision: 0.9990
- Recall: 0.9988
- F1: 0.9989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0105 | 1.0 | 62920 | 0.0061 | 0.9986 | 0.9993 | 0.9979 | 0.9986 |
| 0.0069 | 2.0 | 125840 | 0.0096 | 0.9986 | 0.9993 | 0.9979 | 0.9986 |
| 0.0058 | 3.0 | 188760 | 0.0084 | 0.9988 | 0.9988 | 0.9988 | 0.9988 |
| 0.0047 | 4.0 | 251680 | 0.0080 | 0.9989 | 0.9990 | 0.9988 | 0.9989 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,864 |
sschellhammer/SciTweets_SciBert | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: cc-by-4.0
widget:
- text: "Study: Shifts in electricity generation spur net job growth, but coal jobs decline - via @DukeU https://www.eurekalert.org/news-releases/637217"
example_title: "All categories"
- text: "Shifts in electricity generation spur net job growth, but coal jobs decline"
example_title: "Only Cat 1.1"
- text: "Study on impacts of electricity generation shift via @DukeU https://www.eurekalert.org/news-releases/637217"
example_title: "Only Cat 1.2 and 1.3"
- text: "@DukeU received grant for research on electricity generation shift"
example_title: "Only Cat 1.3"
---
This SciBert-based multi-label classifier, trained as part of the work "SciTweets - A Dataset and Annotation Framework for Detecting Scientific Online Discourse", distinguishes three different forms of science-relatedness for Tweets. See details at https://github.com/AI-4-Sci/SciTweets.
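A minimal usage sketch of ours (not from the card); `return_all_scores=True` exposes the score of each of the three categories:
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="sschellhammer/SciTweets_SciBert",
                      return_all_scores=True)

tweet = ("Study: Shifts in electricity generation spur net job growth, "
         "but coal jobs decline - via @DukeU https://www.eurekalert.org/news-releases/637217")
print(classifier(tweet))  # one score per label; see the widget examples above for category meanings
``` | 896 |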
Theivaprakasham/sentence-transformers-paraphrase-MiniLM-L6-v2-twitter_sentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
TransQuest/monotransquest-da-any_en | [
"LABEL_0"
] | ---
language: multilingual-en
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the language pairs we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-any_en", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)  # the predicted direct assessment (DA) quality score for the sentence pair
```
## Documentation
For more details, follow the documentation:
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 5,407 |
airKlizz/gbert-base-germeval21-toxic | null | Entry not found | 15 |
arianpasquali/twitter-xlm-roberta-base-sentiment-finetunned | [
"Negative",
"Neutral",
"Positive"
] | Entry not found | 15 |
blizrys/biobert-v1.1-finetuned-pubmedqa | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: biobert-v1.1-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-finetuned-pubmedqa
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7737
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8810 | 0.56 |
| No log | 2.0 | 114 | 0.8139 | 0.62 |
| No log | 3.0 | 171 | 0.7963 | 0.68 |
| No log | 4.0 | 228 | 0.7709 | 0.66 |
| No log | 5.0 | 285 | 0.7931 | 0.64 |
| No log | 6.0 | 342 | 0.7420 | 0.7 |
| No log | 7.0 | 399 | 0.7654 | 0.7 |
| No log | 8.0 | 456 | 0.7756 | 0.68 |
| 0.5849 | 9.0 | 513 | 0.7605 | 0.68 |
| 0.5849 | 10.0 | 570 | 0.7737 | 0.7 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| 2,056 |
cardiffnlp/twitter-roberta-base-stance-hillary | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | 0 | |
mariagrandury/roberta-base-finetuned-sms-spam-detection | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- sms_spam
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-sms-spam-detection
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sms_spam
type: sms_spam
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.998
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sms-spam-detection
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the sms_spam dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0133
- Accuracy: 0.998
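A minimal usage sketch (added for illustration; the auto-generated card does not include one, and the label names returned depend on the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="mariagrandury/roberta-base-finetuned-sms-spam-detection")
print(classifier("WINNER!! Claim your free prize now by texting 80082."))
```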
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0363 | 1.0 | 250 | 0.0156 | 0.996 |
| 0.0147 | 2.0 | 500 | 0.0133 | 0.998 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,667 |
persiannlp/parsbert-base-parsinlu-entailment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- parsbert
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
labels = ["entails", "contradicts", "neutral"]
model_name_or_path = "persiannlp/parsbert-base-parsinlu-entailment"
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,)
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
model_predict(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
model_predict(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
model_predict(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
| 1,639 |
spencerh/rightpartisan | null | # Text classifier using DistilBERT to determine Partisanship
## This is one of the single-class partisan-detecting models (see leftpartisan/leftcenterpartisan/rightcenterpartisan/centerpartisan)
label_0 refers to "other" while label_1 refers to "right" (right as in right-leaning).
This was trained with 40,000 articles.
### Best Practices
This model was optimized for 512-token texts. Any text below 150 tokens will yield inaccurate results.
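A minimal usage sketch of ours (not the author's code), using the label mapping described above:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="spencerh/rightpartisan")

article = "..."  # placeholder: a full news article, ideally close to 512 tokens
print(classifier(article))  # e.g. [{'label': 'LABEL_1', 'score': ...}] -> right-leaning
``` | 460 |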
searle-j/kote_for_easygoing_people | [
"감동/감탄",
"경악",
"고마움",
"공포/무서움",
"귀찮음",
"기대감",
"기쁨",
"깨달음",
"놀람",
"당황/난처",
"부끄러움",
"부담/안_내킴",
"불쌍함/연민",
"불안/걱정",
"불평/불만",
"비장함",
"뿌듯함",
"서러움",
"슬픔",
"신기함/관심",
"아껴주는",
"안심/신뢰",
"안타까움/실망",
"어이없음",
"없음",
"역겨움/징그러움",
"우쭐댐/무시함",
"의심/불신",
"재미없음",
"절망",
"존경",
"죄책감",
"즐거움/신남",
"증오/혐오",
"지긋지긋",
"짜증",
"패배/자기혐오",
"편안/쾌적",
"한심함",
"행복",
"화남/분노",
"환영/호의",
"흐뭇함(귀여움/예쁨)",
"힘듦/지침"
] | ---
license: mit
---
| 21 |
Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8885
- name: F1
type: f1
value: 0.8818845305609924
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.892
verified: true
- name: Precision Macro
type: precision
value: 0.8923475194643138
verified: true
- name: Precision Micro
type: precision
value: 0.892
verified: true
- name: Precision Weighted
type: precision
value: 0.894495118514709
verified: true
- name: Recall Macro
type: recall
value: 0.768240931585822
verified: true
- name: Recall Micro
type: recall
value: 0.892
verified: true
- name: Recall Weighted
type: recall
value: 0.892
verified: true
- name: F1 Macro
type: f1
value: 0.7897026729904524
verified: true
- name: F1 Micro
type: f1
value: 0.892
verified: true
- name: F1 Weighted
type: f1
value: 0.8842367889371163
verified: true
- name: loss
type: loss
value: 0.34626322984695435
verified: true
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.8885
verified: true
- name: Precision Macro
type: precision
value: 0.8849064522901132
verified: true
- name: Precision Micro
type: precision
value: 0.8885
verified: true
- name: Precision Weighted
type: precision
value: 0.8922726271705158
verified: true
- name: Recall Macro
type: recall
value: 0.7854833401719518
verified: true
- name: Recall Micro
type: recall
value: 0.8885
verified: true
- name: Recall Weighted
type: recall
value: 0.8885
verified: true
- name: F1 Macro
type: f1
value: 0.8031492596189961
verified: true
- name: F1 Micro
type: f1
value: 0.8885
verified: true
- name: F1 Weighted
type: f1
value: 0.8818845305609924
verified: true
- name: loss
type: loss
value: 0.36373236775398254
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3663
- Accuracy: 0.8885
- F1: 0.8819
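A minimal usage sketch (added for illustration; not part of the auto-generated card). The input sentence is our own; the six emotion labels are those listed for this entry:
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe we actually won the finals!"))
# expected: one of sadness / joy / love / anger / fear / surprise, with a score
```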
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5574 | 0.822 | 0.7956 |
| 0.7483 | 2.0 | 250 | 0.3663 | 0.8885 | 0.8819 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 4,165 |
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_42 | [
"0",
"1"
] | Entry not found | 15 |
Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection | [
"ham",
"spam"
] | ---
tags:
- generated_from_trainer
datasets:
- sms_spam
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-finetuned-spam-detection
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sms_spam
type: sms_spam
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9928263988522238
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-finetuned-spam-detection
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the sms_spam dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0938
- Accuracy: 0.9928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4101 | 1.0 | 131 | 0.4930 | 0.9763 |
| 0.8003 | 2.0 | 262 | 0.3999 | 0.9799 |
| 0.377 | 3.0 | 393 | 0.3196 | 0.9828 |
| 0.302 | 4.0 | 524 | 0.3462 | 0.9828 |
| 0.1945 | 5.0 | 655 | 0.1094 | 0.9928 |
| 0.1393 | 6.0 | 786 | 0.0938 | 0.9928 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
| 2,064 |
mgonnav/finetuning-pysentimiento-war-tweets | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-pysentimiento-war-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-pysentimiento-war-tweets
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on a dataset of 1500 tweets from Peruvian accounts. It achieves the following results on the evaluation set:
- Loss: 1.7689
- Accuracy: 0.7378
- F1: 0.7456
## Model description
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) using five labels: **pro_russia**, **against_ukraine**, **neutral**, **against_russia**, **pro_ukraine**.
## Intended uses & limitations
This model is intended to classify text (more specifically, Spanish tweets) as expressing a position concerning the Russo-Ukrainian war.
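A usage sketch of ours; note that the mapping from `LABEL_0`-`LABEL_4` to the five stances above is not documented in the card, so check the checkpoint's `id2label` config before interpreting the output:
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="mgonnav/finetuning-pysentimiento-war-tweets")
print(classifier("La invasión rusa de Ucrania debe terminar ya."))  # illustrative tweet
# returns one of LABEL_0..LABEL_4; map onto pro_russia / against_ukraine / neutral /
# against_russia / pro_ukraine only after verifying the id2label mapping
```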
## Training and evaluation data
We used an 80/20 training/test split on the aforementioned dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,607 |
postpandas/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9244103213623817
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Accuracy: 0.9245
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8209 | 1.0 | 250 | 0.3154 | 0.91 | 0.9081 |
| 0.2531 | 2.0 | 500 | 0.2204 | 0.9245 | 0.9244 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,807 |
BritishLibraryLabs/bl-books-genre | null | ---
language: multilingual
tags:
- genre
- books
- library
- historic
- glam
license: mit
metrics:
- f1
widget:
- text: "Poems on various subjects. Whereto is prefixed a short essay on the structure of English verse"
- text: "Two Centuries of Soho: its institutions, firms, and amusements. By the Clergy of St. Anne's, Soho, J. H. Cardwell ... H. B. Freeman ... G. C. Wilton ... assisted by other contributors, etc"
- text: "The Adventures of Oliver Twist. [With plates.]"
---
# British Library Books Genre Detector
**Note** this model card is a work in progress.
## Model description
This fine-tuned [`distilbert-base-cased`](https://huggingface.co/distilbert-base-cased) model is trained to predict whether a book from the [British Library's](https://www.bl.uk/) [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection is `fiction` or `non-fiction` based on the title of the book.
## Intended uses & limitations
This model was trained on data created from the [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection. The datasets in this collection are compiled and derived from 49,455 digitised books (65,227 volumes), largely from the 19th Century. This dataset is dominated by English-language books but also includes books in a number of other languages in much smaller numbers. Whilst a subset of this data has metadata relating to genre, the majority of this dataset does not currently contain this information.
This model was originally developed for use as part of the [Living with Machines](https://livingwithmachines.ac.uk/) project in order to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre i.e. whether the title was `fiction` or `non-fiction`.
Particular areas where the model might be limited are:
### Title format
The model's training data (discussed more below) primarily consists of 19th Century book titles that have been catalogued according to British Library cataloguing practices. Since the approaches taken to cataloguing vary across institutions, running the model on titles from a different catalogue might introduce domain drift and lead to degraded model performance.
To give an example of the types of titles included in the training data, here are some randomly sampled examples:
- The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]
- A new musical Interlude, called the Election [By M. P. Andrews.]
- An Elegy written among the ruins of an Abbey. By the author of the Nun [E. Jerningham]
- The Baron's Daughter. A ballad by the author of Poetical Recreations [i.e. William C. Hazlitt] . F.P
- A Little Book of Verse, etc
- The Autumn Leaf Poems
- The Battle of Waterloo, a poem
- Maximilian, and other poems, etc
- Fabellæ mostellariæ: or Devonshire and Wiltshire stories in verse; including specimens of the Devonshire dialect
- The Grave of a Hamlet and other poems, chiefly of the Hebrides ... Selected, with an introduction, by his son J. Hogben
### Date
The model was trained on data that spans the collection period of the [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection. This dataset covers a broad period (from 1500 to 1900); however, it is skewed towards later years. The subset of training data (i.e. data with genre annotations) used to train this model has the following date distribution:
| | Date |
|-------|------------|
| mean | 1864.83 |
| std | 43.0199 |
| min | 1540 |
| 25% | 1847 |
| 50% | 1877 |
| 75% | 1893 |
### Language
Whilst the model is multilingual in so far as it has training data in non-English book titles, these appear much less frequently. An overview of the original training data's language counts are as follows:
| Language | Count |
|---------------------|-------|
| English | 22987 |
| Russian | 461 |
| French | 424 |
| Spanish | 366 |
| German | 347 |
| Dutch | 310 |
| Italian | 212 |
| Swedish | 186 |
| Danish | 164 |
| Hungarian | 132 |
| Polish | 112 |
| Latin | 83 |
| Greek,Modern(1453-) | 42 |
| Czech | 25 |
| Portuguese | 24 |
| Finnish | 14 |
| Serbian | 10 |
| Bulgarian | 7 |
| Icelandic | 4 |
| Irish | 4 |
| Hebrew | 2 |
| NorwegianNynorsk | 2 |
| Lithuanian | 2 |
| Slovenian | 2 |
| Cornish | 1 |
| Romanian | 1 |
| Slovak | 1 |
| Scots | 1 |
| Sanskrit | 1 |
#### How to use
There are a few different ways to use the model. To run the model locally the easiest option is to use the 🤗 Transformers [`pipelines`](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("davanstrien/bl-books-genre")
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/bl-books-genre")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("Oliver Twist")
```
This will return a list containing a dictionary with the predicted label and score:
```
[{'label': 'Fiction', 'score': 0.9980145692825317}]
```
If you intend to use this model beyond initial experimentation, it is highly recommended to create some data to validate the model's predictions. As the model was trained on a specific corpus of book titles, it is also likely to be beneficial to fine-tune the model if you want to run it across a collection of book titles that differ from those in the training corpus.
## Training data
The training data for this model will soon be available from the British Library Research Repository. This section will be updated once this dataset is made public.
The training data was created using the [Zooniverse platform](https://www.zooniverse.org/) and the annotations were done by cataloguers from the [British Library](https://www.bl.uk/). [Snorkel](https://github.com/snorkel-team/snorkel) was used to expand on this original training data through various labelling functions. As a result, some of the labels are *not* generated by a human. More information on the process of creating the annotations will soon be available as part of a series of tutorials documenting this piece of work.
## Training procedure
The model was trained using the [`blurr`](https://github.com/ohmeow/blurr) library. A notebook showing the training process will be made available soon.
## Eval results
The results of the model on a held-out training set are:
```
precision recall f1-score support
Fiction 0.88 0.97 0.92 296
Non-Fiction 0.98 0.93 0.95 554
accuracy 0.94 850
macro avg 0.93 0.95 0.94 850
weighted avg 0.95 0.94 0.94 850
```
As discussed briefly in the bias and limitations sections of this model card, these results should be treated with caution. | 7,905 |
albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135 | [
"0",
"1",
"2",
"3",
"4",
"5"
] | ---
tags: autonlp
language: bn
widget:
- text: "I love AutoNLP 🤗"
datasets:
- albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 1311135
## Validation Metrics
- Loss: 0.35616958141326904
- Accuracy: 0.8979447200566973
- Macro F1: 0.8545383956197669
- Micro F1: 0.8979447200566975
- Weighted F1: 0.8983951947775538
- Macro Precision: 0.8615833774439791
- Micro Precision: 0.8979447200566973
- Weighted Precision: 0.9013559365881655
- Macro Recall: 0.8516503001777104
- Micro Recall: 0.8979447200566973
- Weighted Recall: 0.8979447200566973
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,455 |
hyunwoongko/jaberta-base-ja-xnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
batterydata/batteryscibert-uncased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatterySciBERT-uncased for Battery Abstract Classification
**Language model:** batteryscibert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryscibert-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.12,
"Test accuracy": 97.47,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
| 1,474 |
ibm/roberta-large-vira-intents | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_100",
"LABEL_101",
"LABEL_102",
"LABEL_103",
"LABEL_104",
"LABEL_105",
"LABEL_106",
"LABEL_107",
"LABEL_108",
"LABEL_109",
"LABEL_11",
"LABEL_110",
"LABEL_111",
"LABEL_112",
"LABEL_113",
"LABEL_114",
"LABEL_115",
"LABEL_116",
"LABEL_117",
"LABEL_118",
"LABEL_119",
"LABEL_12",
"LABEL_120",
"LABEL_121",
"LABEL_122",
"LABEL_123",
"LABEL_124",
"LABEL_125",
"LABEL_126",
"LABEL_127",
"LABEL_128",
"LABEL_129",
"LABEL_13",
"LABEL_130",
"LABEL_131",
"LABEL_132",
"LABEL_133",
"LABEL_134",
"LABEL_135",
"LABEL_136",
"LABEL_137",
"LABEL_138",
"LABEL_139",
"LABEL_14",
"LABEL_140",
"LABEL_141",
"LABEL_142",
"LABEL_143",
"LABEL_144",
"LABEL_145",
"LABEL_146",
"LABEL_147",
"LABEL_148",
"LABEL_149",
"LABEL_15",
"LABEL_150",
"LABEL_151",
"LABEL_152",
"LABEL_153",
"LABEL_154",
"LABEL_155",
"LABEL_156",
"LABEL_157",
"LABEL_158",
"LABEL_159",
"LABEL_16",
"LABEL_160",
"LABEL_161",
"LABEL_162",
"LABEL_163",
"LABEL_164",
"LABEL_165",
"LABEL_166",
"LABEL_167",
"LABEL_168",
"LABEL_169",
"LABEL_17",
"LABEL_170",
"LABEL_171",
"LABEL_172",
"LABEL_173",
"LABEL_174",
"LABEL_175",
"LABEL_176",
"LABEL_177",
"LABEL_178",
"LABEL_179",
"LABEL_18",
"LABEL_180",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",
"LABEL_3",
"LABEL_30",
"LABEL_31",
"LABEL_32",
"LABEL_33",
"LABEL_34",
"LABEL_35",
"LABEL_36",
"LABEL_37",
"LABEL_38",
"LABEL_39",
"LABEL_4",
"LABEL_40",
"LABEL_41",
"LABEL_42",
"LABEL_43",
"LABEL_44",
"LABEL_45",
"LABEL_46",
"LABEL_47",
"LABEL_48",
"LABEL_49",
"LABEL_5",
"LABEL_50",
"LABEL_51",
"LABEL_52",
"LABEL_53",
"LABEL_54",
"LABEL_55",
"LABEL_56",
"LABEL_57",
"LABEL_58",
"LABEL_59",
"LABEL_6",
"LABEL_60",
"LABEL_61",
"LABEL_62",
"LABEL_63",
"LABEL_64",
"LABEL_65",
"LABEL_66",
"LABEL_67",
"LABEL_68",
"LABEL_69",
"LABEL_7",
"LABEL_70",
"LABEL_71",
"LABEL_72",
"LABEL_73",
"LABEL_74",
"LABEL_75",
"LABEL_76",
"LABEL_77",
"LABEL_78",
"LABEL_79",
"LABEL_8",
"LABEL_80",
"LABEL_81",
"LABEL_82",
"LABEL_83",
"LABEL_84",
"LABEL_85",
"LABEL_86",
"LABEL_87",
"LABEL_88",
"LABEL_89",
"LABEL_9",
"LABEL_90",
"LABEL_91",
"LABEL_92",
"LABEL_93",
"LABEL_94",
"LABEL_95",
"LABEL_96",
"LABEL_97",
"LABEL_98",
"LABEL_99"
] | ---
language:
- en
tags:
- intent detection
license: "other"
datasets:
- ibm/vira-intents
metrics:
- accuracy
widget:
- text: "Should I be concerned about side effects of the vaccine if I'm breastfeeding?} & Is breastfeeding safe with the vaccine"
example_title: "Breastfeeding"
- text: "Does the vaccine prevent transmission?"
example_title: "Transmission"
- text: "Will the vaccine make me sterile or infertile? "
example_title: "Infertility"
---
## Model Description
This model is based on RoBERTa-large (Liu, 2019), fine-tuned on a dataset of intent expressions available [here](https://research.ibm.com/haifa/dept/vst/debating_data.shtml) and also on the 🤗 Datasets hub [here](https://huggingface.co/datasets/ibm/vira-intents).
The model was created as part of the work described in [Benchmark Data and Evaluation Framework for Intent Discovery Around COVID-19 Vaccine Hesitancy](https://arxiv.org/abs/2205.11966). The model is released under the Community Data License Agreement - Sharing - Version 1.0 ([link](https://cdla.dev/sharing-1-0/)). If you use this model, please cite our paper.
The official GitHub is [here](https://github.com/IBM/vira-intent-discovery). The script used for training the model is [trainer.py](https://github.com/IBM/vira-intent-discovery/blob/master/trainer.py).
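A minimal classification sketch of ours (not from the card); the `LABEL_*` outputs index into the intent classes of the linked dataset:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ibm/roberta-large-vira-intents")
print(classifier("Will the vaccine make me sterile or infertile?"))
# returns one of the 181 LABEL_* classes; resolve the index against the intent
# names in the ibm/vira-intents dataset
```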
## Training parameters
1. base_model = 'roberta-large'
1. learning_rate = 5e-6
1. per_device_train_batch_size = 16
1. per_device_eval_batch_size = 16
1. num_train_epochs = 15
1. load_best_model_at_end = True
1. save_total_limit = 1
1. save_strategy = 'epoch'
1. evaluation_strategy = 'epoch'
1. metric_for_best_model = 'accuracy'
1. seed = 123
## Data collator
DataCollatorWithPadding
| 1,693 |
RJuro/Da-HyggeBERT | [
"afsky",
"begær",
"beundring",
"forlegenhed",
"fornøjelse",
"fortrydelse",
"forvirring",
"frygt",
"glæde",
"indsigt",
"irritation",
"kærlighed",
"lettelse",
"medhold",
"misbilligelse",
"nervøsitet",
"neutral",
"nysgerrighed",
"omsorg",
"optimisme",
"overraskelse",
"skuffelse",
"sorg",
"spænding",
"stolthed",
"taknemmelighed",
"tristhed",
"vrede"
] | ---
language: da
tags:
- danish
- bert
- sentiment
- text-classification
- Maltehb/danish-bert-botxo
- Helsinki-NLP/opus-mt-en-da
- go-emotion
- Certainly
license: cc-by-4.0
datasets:
- go_emotions
metrics:
- Accuracy
widget:
- text: "Det er så sødt af dig at tænke på andre på den måde ved du det?"
- text: "Jeg vil gerne have en playstation."
- text: "Jeg elsker dig"
- text: "Hvordan håndterer jeg min irriterende nabo?"
---
# Danish-Bert-GoÆmotion
Danish Go-Emotions classifier. [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) (uncased) finetuned on a translation of the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset using [Helsinki-NLP/opus-mt-en-da](https://huggingface.co/Helsinki-NLP/opus-mt-en-da). Thus, performance is obviously dependent on the translation model.
## Training
- Translating the training data with MT: [Notebook](https://colab.research.google.com/github/RJuro/Da-HyggeBERT-finetuning/blob/main/HyggeBERT_translation_en_da.ipynb)
- Fine-tuning danish-bert-botxo: coming soon...
## Training Parameters:
```
Num examples = 189900
Num Epochs = 3
Train batch = 8
Eval batch = 8
Learning Rate = 3e-5
Warmup steps = 4273
Total optimization steps = 71125
```
## Loss
### Training loss

### Eval. loss
```
0.1178 (21100 examples)
```
## Using the model with `transformers`
Easiest use with `transformers` and `pipeline`:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model = AutoModelForSequenceClassification.from_pretrained('RJuro/Da-HyggeBERT')
tokenizer = AutoTokenizer.from_pretrained('RJuro/Da-HyggeBERT')
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
classifier('jeg elsker dig')
```
`[{'label': 'kærlighed', 'score': 0.9634820818901062}]`
## Using the model with `simpletransformers`
```python
from simpletransformers.classification import MultiLabelClassificationModel
model = MultiLabelClassificationModel('bert', 'RJuro/Da-HyggeBERT')
# df is assumed to be a pandas DataFrame with a 'text' column
predictions, raw_outputs = model.predict(df['text'])
``` | 2,086 |
Team-PIXEL/pixel-base-finetuned-sst2 | [
"negative",
"positive"
] | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-sst2
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE SST2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
| 1,188 |
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | [
"negative",
"neutral",
"positive"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT-CA SA Model
## Model description
**CAMeLBERT-CA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)".* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 3,364 |
Cameron/BERT-SBIC-offensive | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Connor-tech/bert_cn_finetuning | [
"LABEL_0",
"LABEL_1"
] | Entry not found | 15 |
Maelstrom77/roberta-large-mnli | [
"CONTRADICTION",
"ENTAILMENT",
"NEUTRAL"
] | Entry not found | 15 |
RecordedFuture/Swedish-Sentiment-Fear | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: sv
license: mit
---
## Swedish BERT models for sentiment analysis
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for sentiment analysis in Swedish. The two models are based on the [KB\/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) model and has been fine-tuned to solve a multi-label sentiment analysis task.
The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes.
The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums.
The models are trained only on Swedish data and support only Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported from Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Swedish-Sentiment-Fear
The model can be imported from the transformers library by running
```python
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
```
When the model and tokenizer are initialized, the model can be used for inference.
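For example (a sketch of ours, continuing from the snippet above; the index-to-label convention follows the description earlier on this card):
```python
import torch

text = "..."  # placeholder: a Swedish input text
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = classifier_fear(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze().tolist()
print(probs)  # probs[0]: negative, probs[1]: weak sentiment, probs[2]: strong sentiment
```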
#### Sentiment definitions
#### The strong sentiment includes, but is not limited to
Texts that:
- Hold an expressive emphasis on fear and/ or anxiety
#### The weak sentiment includes, but is not limited to
Texts that:
- Express fear and/ or anxiety in a neutral way
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.45 | 0.8754 | 0.8618 | 0.8895 |
### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
```python
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
```
When the model and tokenizer are initialized, the model can be used for inference.
#### Sentiment definitions
#### The strong sentiment includes, but is not limited to
Texts that:
- Referencing highly violent acts
- Hold an aggressive tone
#### The weak sentiment includes, but is not limited to
Texts that:
- Include general violent statements that do not fall under the strong sentiment
#### Verification metrics
During training, the model had maximized validation metrics at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.35 | 0.7677 | 0.7456 | 0.791 | | 3,299 |
blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.72
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6748
- Accuracy: 0.72
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8396 | 0.58 |
| No log | 2.0 | 114 | 0.8608 | 0.58 |
| No log | 3.0 | 171 | 0.7642 | 0.68 |
| No log | 4.0 | 228 | 0.8196 | 0.64 |
| No log | 5.0 | 285 | 0.6477 | 0.72 |
| No log | 6.0 | 342 | 0.6861 | 0.72 |
| No log | 7.0 | 399 | 0.6735 | 0.74 |
| No log | 8.0 | 456 | 0.6516 | 0.72 |
| 0.6526 | 9.0 | 513 | 0.6707 | 0.72 |
| 0.6526 | 10.0 | 570 | 0.6748 | 0.72 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
| 2,229 |
boychaboy/MNLI_albert-base-v2 | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
cardiffnlp/bertweet-base-stance-hillary | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | 0 | |
lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: en
tags:
- textual-entailment
- nli
- pytorch
datasets:
- mnli
license: mit
widget :
- text: "EpCAM is overexpressed in breast cancer. </s></s> EpCAM is downregulated in breast cancer."
---
# BiomedNLP-PubMedBERT finetuned on textual entailment (NLI)
The [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext?text=%5BMASK%5D+is+a+tumor+suppressor+gene) finetuned on the MNLI dataset. It should be useful in textual entailment tasks involving biomedical corpora.
## Usage
Given two sentences (a premise and a hypothesis), the model outputs the logits of entailment, neutral or contradiction.
You can test the model using the HuggingFace model widget on the side:
- Input two sentences (premise and hypothesis) one after the other.
- The model returns the probabilities of 3 labels: entailment(LABEL:0), neutral(LABEL:1) and contradiction(LABEL:2) respectively.
To use the model locally on your machine:
```python
# import torch
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli")
model = AutoModelForSequenceClassification.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli")
premise = 'EpCAM is overexpressed in breast cancer'
hypothesis = 'EpCAM is downregulated in breast cancer.'
# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
truncation_strategy='only_first')
logits = model(x)[0]
probs = logits.softmax(dim=1)
print('Probabilities for entailment, neutral, contradiction \n', np.around(probs.cpu().detach().numpy(), 3))
# Probabilities for entailment, neutral, contradiction
# 0.001 0.001 0.998
```
## Metrics
Evaluation on classification accuracy (entailment, contradiction, neutral) on MNLI test set:
| Metric | Value |
| --- | --- |
| Accuracy | 0.8338|
See Training Metrics tab for detailed info. | 2,251 |
patrickvonplaten/deberta_v3_amazon_reviews | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta_v3_amazon_reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_v3_amazon_reviews
This model is a fine-tuned version of [patrickvonplaten/deberta_v3_amazon_reviews](https://huggingface.co/patrickvonplaten/deberta_v3_amazon_reviews) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 2
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,097 |
Hate-speech-CNERG/english-abusive-MuRIL | null | ---
language: en
license: afl-3.0
---
This model is used for detecting **abusive speech** in **English**. It is fine-tuned from the MuRIL model using an English abusive speech dataset.
The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive).
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
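A minimal usage sketch (not part of the original card), assuming the checkpoint loads through the standard `text-classification` pipeline and keeps the generic `LABEL_0`/`LABEL_1` names from the mapping above:
```python
# LABEL_0 = Normal, LABEL_1 = Abusive (per the mapping above).
from transformers import pipeline

detector = pipeline("text-classification", model="Hate-speech-CNERG/english-abusive-MuRIL")
print(detector("You are a wonderful person."))
print(detector("I will hurt you."))
# Each call returns something like [{'label': 'LABEL_1', 'score': 0.98}]
```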
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ | 960 |
Cristian-dcg/beto-sentiment-analysis-finetuned-onpremise | [
"NEG",
"NEU",
"POS"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: beto-sentiment-analysis-finetuned-onpremise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-sentiment-analysis-finetuned-onpremise
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7939
- Accuracy: 0.8301
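A brief usage sketch (not in the original card), assuming the checkpoint keeps the base model's NEG/NEU/POS label scheme for Spanish sentiment:
```python
# Hedged sketch: labels are assumed to follow the base beto-sentiment-analysis scheme.
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="Cristian-dcg/beto-sentiment-analysis-finetuned-onpremise",
)
print(sentiment("Me encantó el servicio, volvería sin dudarlo."))
# e.g. [{'label': 'POS', 'score': 0.97}]
```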
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4573 | 1.0 | 1250 | 0.4375 | 0.8191 |
| 0.2191 | 2.0 | 2500 | 0.5367 | 0.8288 |
| 0.1164 | 3.0 | 3750 | 0.7939 | 0.8301 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
| 1,525 |
Clody0071/distilbert-base-multilingual-cased-finetuned-similarite | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pawsx
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-similarite
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: pawsx
type: pawsx
args: fr
metrics:
- name: Accuracy
type: accuracy
value: 0.7995
- name: F1
type: f1
value: 0.7994565743967147
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-similarite
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the pawsx dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4781
- Accuracy: 0.7995
- F1: 0.7995
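A hedged usage sketch (not from the original card); it assumes PAWS-X conventions in which class index 1 corresponds to "paraphrase", since the card does not document the label mapping:
```python
# Sentence-pair (paraphrase) classification; id2label is an assumption here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "Clody0071/distilbert-base-multilingual-cased-finetuned-similarite"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "La réunion a été reportée à lundi.",
    "La réunion est repoussée au lundi suivant.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the two classes; index 1 assumed to be "paraphrase"
```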
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5343 | 1.0 | 772 | 0.4879 | 0.7705 | 0.7714 |
| 0.3523 | 2.0 | 1544 | 0.4781 | 0.7995 | 0.7995 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,843 |
binay1999/text_classification_cybertexts | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: text_classification_cybertexts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classification_cybertexts
This model is a fine-tuned version of [binay1999/distilbert-cybertexts-preprocessed](https://huggingface.co/binay1999/distilbert-cybertexts-preprocessed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.0333 | 1.0 | 38750 | 0.0389 |
| 0.0271 | 2.0 | 77500 | 0.0284 |
| 0.0135 | 3.0 | 116250 | 0.0330 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
| 1,431 |
Maha/xlmtwtroberta_label2 | null | Entry not found | 15 |
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-DA Poetry Classification Model
## Model description
**CAMeLBERT-DA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9874765276908875},
{'label': 'السلسلة', 'score': 0.6877778172492981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 3,383 |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6 | [
"BEI",
"CAI",
"DOH",
"MSA",
"RAB",
"TUN"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID MADAR Corpus6 Model
## Model description
**CAMeLBERT-Mix DID MADAR Corpus6 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [MADAR Corpus 6](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 6 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.9996405839920044},
{'label': 'DOH', 'score': 0.9997853636741638}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 2,938 |
DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | [
"not-applicable\n",
"ok\n",
"too-loose\n",
"too-strict\n"
] | ---
language: "multilingual"
tags:
- Dutch
- French
- English
- Tweets
- Sentiment analysis
widget:
- text: "I really wish I could leave my house after midnight, this makes no sense!"
---
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
This model can be used to determine whether or not a tweet expresses support for a curfew. The model was trained on manually labeled tweets from Belgium in Dutch, French and English.
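A short, hedged usage sketch (not from the original card); the example sentence is the widget text above, and the expected label set is this checkpoint's four stance classes (not-applicable, ok, too-loose, too-strict):
```python
from transformers import pipeline

stance = pipeline(
    "text-classification",
    model="DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support",
)
print(stance("I really wish I could leave my house after midnight, this makes no sense!"))
# e.g. [{'label': 'too-strict', 'score': 0.9}]  (score shown is illustrative)
```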
We categorized several months' worth of these Tweets by topic (government COVID measure) and the opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).

Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
| 1,363 |
ItcastAI/bert_finetuning_test | null | Entry not found | 15 |
emrecan/bert-base-multilingual-cased-allnli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: mit
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased_allnli_tr
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6144
- Accuracy: 0.7662
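A hedged usage sketch built from the widget examples above, assuming the standard `zero-shot-classification` pipeline (the checkpoint is tagged for zero-shot use, so the NLI head scores the candidate labels):
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/bert-base-multilingual-cased-allnli_tr",
)
print(classifier("Dolar yükselmeye devam ediyor.",
                 candidate_labels=["ekonomi", "siyaset", "spor"]))
```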
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8623 | 0.03 | 1000 | 0.9076 | 0.5917 |
| 0.7528 | 0.07 | 2000 | 0.8587 | 0.6119 |
| 0.7074 | 0.1 | 3000 | 0.7867 | 0.6647 |
| 0.6949 | 0.14 | 4000 | 0.7474 | 0.6772 |
| 0.6681 | 0.17 | 5000 | 0.7661 | 0.6814 |
| 0.6597 | 0.2 | 6000 | 0.7264 | 0.6943 |
| 0.6495 | 0.24 | 7000 | 0.7841 | 0.6781 |
| 0.6323 | 0.27 | 8000 | 0.7256 | 0.6952 |
| 0.6308 | 0.31 | 9000 | 0.7319 | 0.6958 |
| 0.6254 | 0.34 | 10000 | 0.7054 | 0.7004 |
| 0.6233 | 0.37 | 11000 | 0.7069 | 0.7085 |
| 0.6165 | 0.41 | 12000 | 0.6880 | 0.7181 |
| 0.6033 | 0.44 | 13000 | 0.6844 | 0.7197 |
| 0.6014 | 0.48 | 14000 | 0.6753 | 0.7129 |
| 0.5947 | 0.51 | 15000 | 0.7000 | 0.7039 |
| 0.5965 | 0.54 | 16000 | 0.6708 | 0.7263 |
| 0.5979 | 0.58 | 17000 | 0.6562 | 0.7285 |
| 0.5787 | 0.61 | 18000 | 0.6554 | 0.7297 |
| 0.58 | 0.65 | 19000 | 0.6544 | 0.7315 |
| 0.574 | 0.68 | 20000 | 0.6549 | 0.7339 |
| 0.5751 | 0.71 | 21000 | 0.6545 | 0.7289 |
| 0.5659 | 0.75 | 22000 | 0.6467 | 0.7371 |
| 0.5732 | 0.78 | 23000 | 0.6448 | 0.7362 |
| 0.5637 | 0.82 | 24000 | 0.6520 | 0.7355 |
| 0.5648 | 0.85 | 25000 | 0.6412 | 0.7345 |
| 0.5622 | 0.88 | 26000 | 0.6350 | 0.7358 |
| 0.5579 | 0.92 | 27000 | 0.6347 | 0.7393 |
| 0.5518 | 0.95 | 28000 | 0.6417 | 0.7392 |
| 0.5547 | 0.99 | 29000 | 0.6321 | 0.7437 |
| 0.524 | 1.02 | 30000 | 0.6430 | 0.7412 |
| 0.4982 | 1.05 | 31000 | 0.6253 | 0.7458 |
| 0.5002 | 1.09 | 32000 | 0.6316 | 0.7418 |
| 0.4993 | 1.12 | 33000 | 0.6197 | 0.7487 |
| 0.4963 | 1.15 | 34000 | 0.6307 | 0.7462 |
| 0.504 | 1.19 | 35000 | 0.6272 | 0.7480 |
| 0.4922 | 1.22 | 36000 | 0.6410 | 0.7433 |
| 0.5016 | 1.26 | 37000 | 0.6295 | 0.7461 |
| 0.4957 | 1.29 | 38000 | 0.6183 | 0.7506 |
| 0.4883 | 1.32 | 39000 | 0.6261 | 0.7502 |
| 0.4985 | 1.36 | 40000 | 0.6315 | 0.7496 |
| 0.4885 | 1.39 | 41000 | 0.6189 | 0.7529 |
| 0.4909 | 1.43 | 42000 | 0.6189 | 0.7473 |
| 0.4894 | 1.46 | 43000 | 0.6314 | 0.7433 |
| 0.4912 | 1.49 | 44000 | 0.6184 | 0.7446 |
| 0.4851 | 1.53 | 45000 | 0.6258 | 0.7461 |
| 0.4879 | 1.56 | 46000 | 0.6286 | 0.7480 |
| 0.4907 | 1.6 | 47000 | 0.6196 | 0.7512 |
| 0.4884 | 1.63 | 48000 | 0.6157 | 0.7526 |
| 0.4755 | 1.66 | 49000 | 0.6056 | 0.7591 |
| 0.4811 | 1.7 | 50000 | 0.5977 | 0.7582 |
| 0.4787 | 1.73 | 51000 | 0.5915 | 0.7621 |
| 0.4779 | 1.77 | 52000 | 0.6014 | 0.7583 |
| 0.4767 | 1.8 | 53000 | 0.6041 | 0.7623 |
| 0.4737 | 1.83 | 54000 | 0.6093 | 0.7563 |
| 0.4836 | 1.87 | 55000 | 0.6001 | 0.7568 |
| 0.4765 | 1.9 | 56000 | 0.6109 | 0.7601 |
| 0.4776 | 1.94 | 57000 | 0.6046 | 0.7599 |
| 0.4769 | 1.97 | 58000 | 0.5970 | 0.7568 |
| 0.4654 | 2.0 | 59000 | 0.6147 | 0.7614 |
| 0.4144 | 2.04 | 60000 | 0.6439 | 0.7566 |
| 0.4101 | 2.07 | 61000 | 0.6373 | 0.7527 |
| 0.4192 | 2.11 | 62000 | 0.6136 | 0.7575 |
| 0.4128 | 2.14 | 63000 | 0.6283 | 0.7560 |
| 0.4204 | 2.17 | 64000 | 0.6187 | 0.7625 |
| 0.4114 | 2.21 | 65000 | 0.6127 | 0.7621 |
| 0.4097 | 2.24 | 66000 | 0.6188 | 0.7626 |
| 0.4129 | 2.28 | 67000 | 0.6156 | 0.7639 |
| 0.4085 | 2.31 | 68000 | 0.6232 | 0.7616 |
| 0.4074 | 2.34 | 69000 | 0.6240 | 0.7605 |
| 0.409 | 2.38 | 70000 | 0.6153 | 0.7591 |
| 0.4046 | 2.41 | 71000 | 0.6375 | 0.7587 |
| 0.4117 | 2.45 | 72000 | 0.6145 | 0.7629 |
| 0.4002 | 2.48 | 73000 | 0.6279 | 0.7610 |
| 0.4042 | 2.51 | 74000 | 0.6176 | 0.7646 |
| 0.4055 | 2.55 | 75000 | 0.6277 | 0.7643 |
| 0.4021 | 2.58 | 76000 | 0.6196 | 0.7642 |
| 0.4081 | 2.62 | 77000 | 0.6127 | 0.7659 |
| 0.408 | 2.65 | 78000 | 0.6237 | 0.7638 |
| 0.3997 | 2.68 | 79000 | 0.6190 | 0.7636 |
| 0.4093 | 2.72 | 80000 | 0.6152 | 0.7648 |
| 0.4095 | 2.75 | 81000 | 0.6155 | 0.7627 |
| 0.4088 | 2.79 | 82000 | 0.6130 | 0.7641 |
| 0.4063 | 2.82 | 83000 | 0.6072 | 0.7646 |
| 0.3978 | 2.85 | 84000 | 0.6128 | 0.7662 |
| 0.4034 | 2.89 | 85000 | 0.6157 | 0.7627 |
| 0.4044 | 2.92 | 86000 | 0.6127 | 0.7661 |
| 0.403 | 2.96 | 87000 | 0.6126 | 0.7664 |
| 0.4033 | 2.99 | 88000 | 0.6144 | 0.7662 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
| 7,067 |
lewtun/xlm-roberta-base-finetuned-marc | [
"good",
"great",
"ok",
"poor",
"terrible"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9932
- Mae: 0.4838
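A small usage sketch (not part of the original card); the example review is illustrative, and the label set (terrible/poor/ok/good/great) comes from this checkpoint's configuration:
```python
from transformers import pipeline

reviews = pipeline("text-classification", model="lewtun/xlm-roberta-base-finetuned-marc")
print(reviews("Das Produkt kam beschädigt an und der Support hat nicht geantwortet."))
# e.g. [{'label': 'poor', 'score': 0.6}]  (illustrative output)
```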
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.05 | 1.0 | 860 | 1.0007 | 0.5074 |
| 0.9166 | 2.0 | 1720 | 0.9932 | 0.4838 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| 1,423 |
lighteternal/nli-xlm-r-greek | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- el
- en
tags:
- xlm-roberta-base
datasets:
- multi_nli
- snli
- allnli_greek
metrics:
- accuracy
pipeline_tag: zero-shot-classification
widget:
- text: "Η Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας."
candidate_labels: "τεχνολογία, πολιτική, αθλητισμός"
multi_class: false
license: apache-2.0
---
# Cross-Encoder for Greek Natural Language Inference (Textual Entailment) & Zero-Shot Classification
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
The model was trained on the combined Greek+English version of the AllNLI dataset (the sum of [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)). The Greek part was created using the EN2EL NMT model available [here](https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased).
The model can be used in two ways:
* NLI/Textual Entailment: For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
* Zero-shot classification through the Huggingface pipeline: Given a sentence and a set of labels/topics, it will output the likelihood of the sentence belonging to each of the topics. Under the hood, the logit for entailment between the sentence and each label is taken as the logit for the candidate label being valid.
## Performance
Evaluation on classification accuracy (entailment, contradiction, neutral) on the mixed (Greek+English) AllNLI dev set:
| Metric | Value |
| --- | --- |
| Accuracy | 0.8409 |
## To use the model for NLI/Textual Entailment
#### Usage with sentence_transformers
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('lighteternal/nli-xlm-r-greek')
scores = model.predict([('Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'),
('Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο'),
('Δυο γυναίκες μιλάνε στο κινητό', 'Το τραπέζι ήταν πράσινο')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
print(scores, labels)
# Οutputs
#[[-3.1526504 2.9981945 -0.3108107]
# [ 5.0549307 -2.757949 -1.6220676]
# [-0.5124733 -2.2671669 3.1630592]] ['entailment', 'contradiction', 'neutral']
```
#### Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('lighteternal/nli-xlm-r-greek')
tokenizer = AutoTokenizer.from_pretrained('lighteternal/nli-xlm-r-greek')
features = tokenizer(['Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'],
['Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο.'],
padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## To use the model for Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='lighteternal/nli-xlm-r-greek')
sent = "Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας"
candidate_labels = ["πολιτική", "τεχνολογία", "αθλητισμός"]
res = classifier(sent, candidate_labels)
print(res)
#outputs:
#{'sequence': 'Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας', 'labels': ['τεχνολογία', 'αθλητισμός', 'πολιτική'], 'scores': [0.8380699157714844, 0.09086982160806656, 0.07106029987335205]}
```
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
### Citation info
Citation for the Greek model TBA.
Based on the work [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)
Kudos to @nreimers (Nils Reimers) for his support on GitHub.
| 4,773 |
maxpe/twitter-roberta-base_semeval18_emodetection | null | # Twitter-roBERTa-base_SemEval18_Emodetection
This is a Twitter-roBERTa-base model trained on ~7000 tweets in English annotated for 11 emotion categories in [SemEval-2018 Task 1: Affect in Tweets: SubTask 5: Emotion Classification](https://competitions.codalab.org/competitions/17751).
Run the classifier on the test set of the competition:
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModel
from torch.utils.data import DataLoader
import torch
import pandas as pd
# choose GPU when available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base",model_max_length=512)
# build custom model with classification layer on top and a dropout layer before
class RobertaClass(torch.nn.Module):
def __init__(self):
super(RobertaClass, self).__init__()
self.l1 = AutoModel.from_pretrained("cardiffnlp/twitter-roberta-base",return_dict=False)
self.l2 = torch.nn.Dropout(0.3)
self.l3 = torch.nn.Linear(768, 11)
def forward(self, input_ids, attention_mask):
_, output_1= self.l1(input_ids=input_ids, attention_mask=attention_mask)
output_2 = self.l2(output_1)
output = self.l3(output_2)
return output
model_name="twitter-roberta-base_semeval18_emodetection/pytorch_model.bin"
model=RobertaClass()
model.load_state_dict(torch.load(model_name,map_location=torch.device(device)))
model.eval()
# run on more than 1 GPU
model = torch.nn.DataParallel(model)
model.to(device)
twnames=['anger','anticipation','disgust','fear','joy','love','optimism','pessimism','sadness','surprise','trust']
# load from hugging face dataset hub
testset_raw = load_dataset('sem_eval_2018_task_1','subtask5.english',split='test')
# remove old columns
testset=testset_raw.remove_columns(twnames+["ID"])
# tokenize
testset_tokenized = testset.map(lambda e: tokenizer(e['Tweet'], truncation=True, padding='max_length'), batched=True)
testset_tokenized=testset_tokenized.remove_columns("Tweet")
testset_tokenized.set_format(type='torch', columns=['input_ids', 'attention_mask'])
outfile="predicted_2018-E-c-En-test-gold.txt"
MAX_LEN = 512
VALID_BATCH_SIZE = 8
# set batch size according to available RAM
# VALID_BATCH_SIZE = 1000
# set num_workers for parallel processing
inference_params = {'batch_size': VALID_BATCH_SIZE,
'shuffle': False,
# 'num_workers': 1
}
inference_loader = DataLoader(testset_tokenized, **inference_params)
open(outfile,"w").close()
with torch.no_grad():
# change lines for progress manager
# for _, data in tqdm(enumerate(inference_loader, 0),total=len(inference_loader)):
for _, data in enumerate(inference_loader, 0):
outputs = model(input_ids=data['input_ids'],attention_mask=data['attention_mask'])
fin_outputs=torch.sigmoid(outputs).cpu().detach().numpy().tolist()
pd.DataFrame(fin_outputs).to_csv(outfile,index=False,header=False,sep="\t",mode='a')
# # dataset from file (one text per line)
# from datasets import Dataset
# with open(linesoftextfile,"rb") as textfile:
# textdict={"text":[x.decode().rstrip("\n") for x in textfile.readlines()]}
# inference_dataset=Dataset.from_dict(textdict)
# del(textdict)
``` | 3,356 |
mnaylor/bigbird-base-mimic-mortality | null | # BigBird for Mortality Prediction
Starting from Google's base BigBird model, we fine-tuned it on binary mortality prediction using MIMIC admission notes. The model seeks to predict whether a given patient will expire within the current ICU stay, based on the text available at admission. Data were prepared for this task as described in [this project](https://github.com/bvanaken/clinical-outcome-prediction), using simulated admission notes (taken from discharge summaries). This model will be used in an upcoming submission for IMLH at ICML 2021.
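A hedged usage sketch (not part of the original card); it assumes the uploaded checkpoint loads as a standard BigBird sequence-classification model with two classes and that index 1 is the mortality class — neither is documented here:
```python
# Assumptions: standard HF sequence-classification head; index 1 = in-hospital mortality.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "mnaylor/bigbird-base-mimic-mortality"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

note = "CHIEF COMPLAINT: shortness of breath. HISTORY OF PRESENT ILLNESS: ..."  # illustrative text
inputs = tokenizer(note, return_tensors="pt", truncation=True, max_length=4096)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```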
### References
* Van Aken, et al., 2021: [Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75/)
* Zaheer, et al., 2020: [Big Bird: Transformers for Longer Sequences](https://papers.nips.cc/paper/2020/hash/c8512d142a2d849725f31a9a7a361ab9-Abstract.html) | 895 |
shiyue/roberta-large-tac08 | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
BaxterAI/SentimentClassifier | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
- f1
model-index:
- name: SentimentClassifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.91
- name: F1
type: f1
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SentimentClassifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4425
- Accuracy: 0.91
- F1: 0.91
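A short usage sketch (not from the original card), assuming the checkpoint exposes a standard binary head; raw labels may appear as LABEL_0/LABEL_1 unless the config maps them to negative/positive:
```python
from transformers import pipeline

polarity = pipeline("text-classification", model="BaxterAI/SentimentClassifier")
print(polarity("Battery died after two days, very disappointed."))
```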
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,498 |
anahitapld/electra-small-dbd | null | ---
license: apache-2.0
---
| 28 |
amanbawa96/roberta_Aman | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",
"LABEL_3",
"LABEL_30",
"LABEL_31",
"LABEL_32",
"LABEL_33",
"LABEL_34",
"LABEL_35",
"LABEL_36",
"LABEL_37",
"LABEL_38",
"LABEL_39",
"LABEL_4",
"LABEL_40",
"LABEL_41",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |