| index | modelId | label | readme |
|---|---|---|---|
869 | cointegrated/rubert-base-cased-nli-threeway | [
"contradiction",
"entailment",
"neutral"
] | ---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Я хочу поехать в Австралию"
candidate_labels: "спорт,путешествия,музыка,кино,книги,наука,политика"
hypothesis_template: "Тема текста - {}."
---
# RuBERT for NLI (natural language inference)
This is the [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned to predict the logical relationship between two short texts: entailment, contradiction, or neutral.
## Usage
How to run the model for NLI:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_checkpoint = 'cointegrated/rubert-base-cased-nli-threeway'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()

text1 = 'Сократ - человек, а все люди смертны.'
text2 = 'Сократ никогда не умрёт.'
with torch.inference_mode():
    out = model(**tokenizer(text1, text2, return_tensors='pt').to(model.device))
    proba = torch.softmax(out.logits, -1).cpu().numpy()[0]
print({v: proba[k] for k, v in model.config.id2label.items()})
# {'entailment': 0.009525929, 'contradiction': 0.9332064, 'neutral': 0.05726764}
```
You can also use this model for zero-shot short text classification (by labels only), e.g. for sentiment analysis:
```python
def predict_zero_shot(text, label_texts, model, tokenizer, label='entailment', normalize=True):
    tokens = tokenizer([text] * len(label_texts), label_texts, truncation=True, return_tensors='pt', padding=True)
    with torch.inference_mode():
        result = torch.softmax(model(**tokens.to(model.device)).logits, -1)
    proba = result[:, model.config.label2id[label]].cpu().numpy()
    if normalize:
        proba /= sum(proba)
    return proba
classes = ['Я доволен', 'Я недоволен']
predict_zero_shot('Какая гадость эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.05609814, 0.9439019 ], dtype=float32)
predict_zero_shot('Какая вкусная эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.9059292 , 0.09407079], dtype=float32)
```
Alternatively, you can use [Huggingface pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) for inference.
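For instance, a minimal zero-shot pipeline sketch mirroring the widget example above (scores are illustrative):
```python
from transformers import pipeline

classifier = pipeline('zero-shot-classification', model='cointegrated/rubert-base-cased-nli-threeway')
result = classifier(
    'Я хочу поехать в Австралию',
    candidate_labels=['спорт', 'путешествия', 'музыка', 'кино', 'книги', 'наука', 'политика'],
    hypothesis_template='Тема текста - {}.',
)
# the top-ranked label and its score
print(result['labels'][0], result['scores'][0])
```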
## Sources
The model has been trained on a series of NLI datasets automatically translated from English to Russian.
Most datasets were taken [from the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets):
[JOCI](https://github.com/sheng-z/JOCI),
[MNLI](https://cims.nyu.edu/~sbowman/multinli/),
[MPE](https://aclanthology.org/I17-1011/),
[SICK](http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf),
[SNLI](https://nlp.stanford.edu/projects/snli/).
Some datasets were obtained from the original sources:
[ANLI](https://github.com/facebookresearch/anli),
[NLI-style FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md),
[IMPPRES](https://github.com/facebookresearch/Imppres).
## Performance
The table below shows ROC AUC (one class vs rest) for five models on the corresponding *dev* sets:
- [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli): a small BERT predicting entailment vs not_entailment
- [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway): a base-sized BERT predicting entailment vs not_entailment
- [threeway](https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway) (**this model**): a base-sized BERT predicting entailment vs contradiction vs neutral
- [vicgalle-xlm](https://huggingface.co/vicgalle/xlm-roberta-large-xnli-anli): a large multilingual NLI model
- [facebook-bart](https://huggingface.co/facebook/bart-large-mnli): a large multilingual NLI model
|model |add_one_rte|anli_r1|anli_r2|anli_r3|copa|fever|help|iie |imppres|joci|mnli |monli|mpe |scitail|sick|snli|terra|total |
|------------------------|-----------|-------|-------|-------|----|-----|----|-----|-------|----|-----|-----|----|-------|----|----|-----|------|
|n_observations |387 |1000 |1000 |1200 |200 |20474|3355|31232|7661 |939 |19647|269 |1000|2126 |500 |9831|307 |101128|
|tiny/entailment |0.77 |0.59 |0.52 |0.53 |0.53|0.90 |0.81|0.78 |0.93 |0.81|0.82 |0.91 |0.81|0.78 |0.93|0.95|0.67 |0.77 |
|twoway/entailment |0.89 |0.73 |0.61 |0.62 |0.58|0.96 |0.92|0.87 |0.99 |0.90|0.90 |0.99 |0.91|0.96 |0.97|0.97|0.87 |0.86 |
|threeway/entailment |0.91 |0.75 |0.61 |0.61 |0.57|0.96 |0.56|0.61 |0.99 |0.90|0.91 |0.67 |0.92|0.84 |0.98|0.98|0.90 |0.80 |
|vicgalle-xlm/entailment |0.88 |0.79 |0.63 |0.66 |0.57|0.93 |0.56|0.62 |0.77 |0.80|0.90 |0.70 |0.83|0.84 |0.91|0.93|0.93 |0.78 |
|facebook-bart/entailment|0.51 |0.41 |0.43 |0.47 |0.50|0.74 |0.55|0.57 |0.60 |0.63|0.70 |0.52 |0.56|0.68 |0.67|0.72|0.64 |0.58 |
|threeway/contradiction | |0.71 |0.64 |0.61 | |0.97 | | |1.00 |0.77|0.92 | |0.89| |0.99|0.98| |0.85 |
|threeway/neutral | |0.79 |0.70 |0.62 | |0.91 | | |0.99 |0.68|0.86 | |0.79| |0.96|0.96| |0.83 |
For evaluation (and for training of the [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli) and [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway) models), some extra datasets were used:
[Add-one RTE](https://cs.brown.edu/people/epavlick/papers/ans.pdf),
[CoPA](https://people.ict.usc.edu/~gordon/copa.html),
[IIE](https://aclanthology.org/I17-1100), and
[SCITAIL](https://allenai.org/data/scitail) taken from [the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets) and translated,
[HELP](https://github.com/verypluming/HELP) and [MoNLI](https://github.com/atticusg/MoNLI) taken from the original sources and translated,
and Russian [TERRa](https://russiansuperglue.com/ru/tasks/task_info/TERRa).
|
870 | cointegrated/rubert-base-cased-nli-twoway | [
"entailment",
"not_entailment"
] | ---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Я хочу поехать в Австралию"
candidate_labels: "спорт,путешествия,музыка,кино,книги,наука,политика"
hypothesis_template: "Тема текста - {}."
---
# RuBERT for NLI (natural language inference)
This is the [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.
For more details, see the card for a similar model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
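Usage mirrors the threeway card; a minimal sketch with the same example pair (output omitted):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = 'cointegrated/rubert-base-cased-nli-twoway'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)

text1 = 'Сократ - человек, а все люди смертны.'
text2 = 'Сократ никогда не умрёт.'
with torch.inference_mode():
    out = model(**tokenizer(text1, text2, return_tensors='pt').to(model.device))
    proba = torch.softmax(out.logits, -1).cpu().numpy()[0]
# probabilities for 'entailment' and 'not_entailment'
print({v: proba[k] for k, v in model.config.id2label.items()})
```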
|
871 | cointegrated/rubert-tiny-bilingual-nli | [
"entailment",
"not_entailment"
] | ---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Сервис отстойный, кормили невкусно"
candidate_labels: "Мне понравилось, Мне не понравилось"
hypothesis_template: "{}."
---
# RuBERT-tiny for NLI (natural language inference)
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.
For more details, see the card for a related model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
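A minimal zero-shot sketch mirroring the widget example above (the pipeline relies on the model's `entailment` score; output omitted):
```python
from transformers import pipeline

classifier = pipeline('zero-shot-classification', model='cointegrated/rubert-tiny-bilingual-nli')
result = classifier(
    'Сервис отстойный, кормили невкусно',
    candidate_labels=['Мне понравилось', 'Мне не понравилось'],
    hypothesis_template='{}.',
)
# the label ranked highest by the model
print(result['labels'][0])
```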
|
872 | cointegrated/rubert-tiny-sentiment-balanced | [
"negative",
"neutral",
"positive"
] | ---
language: ["ru"]
tags:
- russian
- classification
- sentiment
- multiclass
widget:
- text: "Какая гадость эта ваша заливная рыба!"
---
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned for classification of sentiment for short Russian texts.
The problem is formulated as multiclass classification: `negative` vs `neutral` vs `positive`.
## Usage
The function below estimates the sentiment of the given text:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_checkpoint = 'cointegrated/rubert-tiny-sentiment-balanced'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()

def get_sentiment(text, return_type='label'):
    """ Calculate sentiment of a text. `return_type` can be 'label', 'score' or 'proba' """
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device)
        proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()[0]
    if return_type == 'label':
        return model.config.id2label[proba.argmax()]
    elif return_type == 'score':
        return proba.dot([-1, 0, 1])
    return proba
text = 'Какая гадость эта ваша заливная рыба!'
# classify the text
print(get_sentiment(text, 'label')) # negative
# score the text on the scale from -1 (very negative) to +1 (very positive)
print(get_sentiment(text, 'score')) # -0.5894946306943893
# calculate probabilities of all labels
print(get_sentiment(text, 'proba')) # [0.7870447 0.4947824 0.19755007]
```
## Training
We trained the model on [the datasets collected by Smetanin](https://github.com/sismetanin/sentiment-analysis-in-russian). We have converted all training data into a 3-class format and have up- and downsampled the training data to balance both the sources and the classes. The training code is available as [a Colab notebook](https://gist.github.com/avidale/e678c5478086c1d1adc52a85cb2b93e6). The metrics on the balanced test set are the following:
| Source | Macro F1 |
| ----------- | ----------- |
| SentiRuEval2016_banks | 0.83 |
| SentiRuEval2016_tele | 0.74 |
| kaggle_news | 0.66 |
| linis | 0.50 |
| mokoron | 0.98 |
| rureviews | 0.72 |
| rusentiment | 0.67 |
|
873 | cointegrated/rubert-tiny-toxicity | [
"dangerous",
"insult",
"non-toxic",
"obscenity",
"threat"
] | ---
language: ["ru"]
tags:
- russian
- classification
- toxicity
- multilabel
widget:
- text: "Иди ты нафиг!"
---
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned for classification of toxicity and inappropriateness for short informal Russian texts, such as comments in social networks.
The problem is formulated as multilabel classification with the following classes:
- `non-toxic`: the text does NOT contain insults, obscenities, or threats, in the sense of the [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) competition.
- `insult`
- `obscenity`
- `threat`
- `dangerous`: the text is inappropriate, in the sense of [Babakov et al.](https://arxiv.org/abs/2103.05345), i.e. it can harm the reputation of the speaker.
A text can be considered safe if it is BOTH `non-toxic` and NOT `dangerous`.
## Usage
The function below estimates the probability that the text is either toxic OR dangerous:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_checkpoint = 'cointegrated/rubert-tiny-toxicity'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()

def text2toxicity(text, aggregate=True):
    """ Calculate toxicity of a text (if aggregate=True) or a vector of toxicity aspects (if aggregate=False)"""
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device)
        proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()
    if isinstance(text, str):
        proba = proba[0]
    if aggregate:
        return 1 - proba.T[0] * (1 - proba.T[-1])
    return proba
print(text2toxicity('я люблю нигеров', True))
# 0.9350118728093193
print(text2toxicity('я люблю нигеров', False))
# [0.9715758 0.0180863 0.0045551 0.00189755 0.9331106 ]
print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], True))
# [0.93501186 0.04156357]
print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], False))
# [[9.7157580e-01 1.8086294e-02 4.5550885e-03 1.8975559e-03 9.3311059e-01]
# [9.9979788e-01 1.9048342e-04 1.5297388e-04 1.7452303e-04 4.1369814e-02]]
```
## Training
The model has been trained on the joint dataset of [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) and [Babakov et al.](https://arxiv.org/abs/2103.05345) with the `Adam` optimizer, a learning rate of `1e-5`, and a batch size of `64` for `15` epochs. A text was considered inappropriate if its inappropriateness score was higher than 0.8, and appropriate if it was lower than 0.2. The per-label ROC AUC on the dev set is:
```
non-toxic : 0.9937
insult : 0.9912
obscenity : 0.9881
threat : 0.9910
dangerous : 0.8295
``` |
874 | cointegrated/rubert-tiny2-cedr-emotion-detection | [
"anger",
"fear",
"joy",
"no_emotion",
"sadness",
"surprise"
] | ---
language: ["ru"]
tags:
- russian
- classification
- sentiment
- emotion-classification
- multiclass
datasets:
- cedr
widget:
- text: "Бесишь меня, падла"
- text: "Как здорово, что все мы здесь сегодня собрались"
- text: "Как-то стрёмно, давай свалим отсюда?"
- text: "Грусть-тоска меня съедает"
- text: "Данный фрагмент текста не содержит абсолютно никаких эмоций"
- text: "Нифига себе, неужели так тоже бывает!"
---
This is the [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) model fine-tuned for classification of emotions in Russian sentences. The task is multilabel classification, because one sentence can contain multiple emotions.
The model was trained on the [CEDR dataset](https://huggingface.co/datasets/cedr) described in the paper ["Data-Driven Model for Emotion Detection in Russian Texts"](https://doi.org/10.1016/j.procs.2021.06.075) by Sboev et al.
It was trained with the Adam optimizer for 40 epochs with a learning rate of `1e-5` and a batch size of 64 [in this notebook](https://colab.research.google.com/drive/1AFW70EJaBn7KZKRClDIdDUpbD46cEsat?usp=sharing).
The quality of the predicted probabilities on the test dataset is the following:
| label | no emotion | joy |sadness |surprise| fear |anger | mean | mean (emotions) |
|----------|------------|--------|--------|--------|--------|--------| --------| ----------------|
| AUC | 0.9286 | 0.9512 | 0.9564 | 0.8908 | 0.8955 | 0.7511 | 0.8956 | 0.8890 |
| F1 micro | 0.8624 | 0.9389 | 0.9362 | 0.9469 | 0.9575 | 0.9261 | 0.9280 | 0.9411 |
| F1 macro | 0.8562 | 0.8962 | 0.9017 | 0.8366 | 0.8359 | 0.6820 | 0.8348 | 0.8305 |
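The card does not include a usage snippet; a minimal sketch in the spirit of the other cards above (sigmoid over the logits, since the task is multilabel; output values are illustrative only):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = 'cointegrated/rubert-tiny2-cedr-emotion-detection'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)

text = 'Как здорово, что все мы здесь сегодня собрались'
with torch.no_grad():
    inputs = tokenizer(text, return_tensors='pt', truncation=True)
    proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()[0]
# per-emotion probabilities (multilabel, so they need not sum to 1)
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(proba)})
```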
|
875 | coldfir3/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9222116474112371
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.922
- F1: 0.9222
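A minimal usage sketch (assuming the checkpoint is published under the repository name above; output omitted):
```python
from transformers import pipeline

classifier = pipeline('text-classification', model='coldfir3/distilbert-base-uncased-finetuned-emotion')
# returns the predicted emotion label and its score
print(classifier('I am so happy today!'))
```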
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
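A sketch of how these values map onto `transformers.TrainingArguments` (illustrative only; the exact training script is not part of this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='distilbert-base-uncased-finetuned-emotion',
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type='linear',
    num_train_epochs=2,
    # Adam betas and epsilon as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```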
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8262 | 1.0 | 250 | 0.3073 | 0.904 | 0.9021 |
| 0.2484 | 2.0 | 500 | 0.2175 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
877 | coppercitylabs/uzbek-news-category-classifier | [
"дунё",
"жамият",
"жиноят",
"иқтисодиёт",
"маданият",
"реклама",
"саломатлик",
"сиёсат",
"спорт",
"фан ва техника",
"шоу-бизнес"
] | ---
language: uz
tags:
- uzbek
- cyrillic
- news category classifier
license: mit
datasets:
- webcrawl
---
# Uzbek news category classifier (based on UzBERT)
UzBERT fine-tuned to classify news articles into one of the following
categories:
- дунё
- жамият
- жиноят
- иқтисодиёт
- маданият
- реклама
- саломатлик
- сиёсат
- спорт
- фан ва техника
- шоу-бизнес
## How to use
```python
>>> from transformers import pipeline
>>> classifier = pipeline('text-classification', model='coppercitylabs/uzbek-news-category-classifier')
>>> text = """Маҳоратли пара-енгил атлетикачимиз Ҳусниддин Норбеков Токио-2020 Паралимпия ўйинларида ғалаба қозониб, делегациямиз ҳисобига навбатдаги олтин медални келтирди. Бу ҳақда МОҚ хабар берди.
Норбеков ҳозиргина ядро улоқтириш дастурида ўз ғалабасини тантана қилди. Ушбу машқда вакилимиз 16:13 метр натижа билан энг яхши кўрсаткични қайд этди.
Шу тариқа, делегациямиз ҳисобидаги медаллар сони 16 (6 та олтин, 4 та кумуш ва 6 та бронза) тага етди. Кейинги кун дастурларида иштирок этадиган ҳамюртларимизга омад тилаб қоламиз!"""
>>> classifier(text)
[{'label': 'спорт', 'score': 0.9865401983261108}]
```
## Fine-tuning data
Fine-tuned on ~60K news articles for 3 epochs.
|
878 | cross-encoder/ms-marco-MiniLM-L-12-v2 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
879 | cross-encoder/ms-marco-MiniLM-L-2-v2 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
880 | cross-encoder/ms-marco-MiniLM-L-4-v2 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
881 | cross-encoder/ms-marco-MiniLM-L-6-v2 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
882 | cross-encoder/ms-marco-TinyBERT-L-2-v2 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
883 | cross-encoder/ms-marco-TinyBERT-L-2 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
884 | cross-encoder/ms-marco-TinyBERT-L-4 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
885 | cross-encoder/ms-marco-TinyBERT-L-6 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
886 | cross-encoder/ms-marco-electra-base | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
887 | cross-encoder/msmarco-MiniLM-L12-en-de-v1 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS MARCO - EN-DE
This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html).
The training code is available in this repository, see `train_script.py`.
## Usage with SentenceTransformers
When you have [SentenceTransformers](https://www.sbert.net/) installed, you can use the model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
query = 'How many people live in Berlin?'
docs = ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs)
```
## Usage with Transformers
With the transformers library, you can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Performance
The performance was evaluated on three datasets:
- **TREC-DL19 EN-EN**: The original [TREC 2019 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019.html): Given an English query and 1000 documents (retrieved by BM25 lexical search), rank the documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46; a perfect re-ranker can achieve a score of 95.47.
- **TREC-DL19 DE-EN**: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
- **GermanDPR DE-DE**: The [GermanDPR](https://www.deepset.ai/germanquad) dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 Million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.
We also check the performance of bi-encoders using the same evaluation: The retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.
| Model-Name | TREC-DL19 EN-EN | TREC-DL19 DE-EN | GermanDPR DE-DE | Docs / Sec |
| ------------- |:-------------:| :-----: | :---: | :----: |
| BM25 | 45.46 | - | 35.85 | -|
| **Cross-Encoder Re-Rankers** | | | |
| [cross-encoder/msmarco-MiniLM-L6-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L6-en-de-v1) | 72.43 | 65.53 | 46.77 | 1600 |
| [cross-encoder/msmarco-MiniLM-L12-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L12-en-de-v1) | 72.94 | 66.07 | 49.91 | 900 |
| [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) (DE only) | - | - | 53.67 | 260 |
| [deepset/gbert-base-germandpr-reranking](https://huggingface.co/deepset/gbert-base-germandpr-reranking) (DE only) | - | - | 53.59 | 260 |
| **Bi-Encoders (re-ranking)** | | | |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) | 63.38 | 58.28 | 37.88 | 940 |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch) | 65.51 | 58.69 | 38.32 | 940 |
| [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased) (DE only) | - | - | 34.31 | 450 |
| [deepset/gbert-base-germandpr-question_encoder](https://huggingface.co/deepset/gbert-base-germandpr-question_encoder) (DE only) | - | - | 42.55 | 450 |
Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
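For illustration, a minimal sketch of the bi-encoder re-ranking setup described above, using one of the bi-encoders from the table (this is not the exact evaluation script):
```python
from sentence_transformers import SentenceTransformer, util

bi_encoder = SentenceTransformer('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned')

query = 'How many people live in Berlin?'
passages = [
    'Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
    'New York City is famous for the Metropolitan Museum of Art.',
]

# embed query and passages, then re-rank by cosine similarity
query_emb = bi_encoder.encode(query, convert_to_tensor=True)
passage_embs = bi_encoder.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(query_emb, passage_embs)[0]
ranked = sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True)
```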
|
888 | cross-encoder/msmarco-MiniLM-L6-en-de-v1 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS MARCO - EN-DE
This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html).
The training code is available in this repository, see `train_script.py`.
## Usage with SentenceTransformers
When you have [SentenceTransformers](https://www.sbert.net/) installed, you can use the model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
query = 'How many people live in Berlin?'
docs = ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs)
```
## Usage with Transformers
With the transformers library, you can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Performance
The performance was evaluated on three datasets:
- **TREC-DL19 EN-EN**: The original [TREC 2019 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019.html): Given an English query and 1000 documents (retrieved by BM25 lexical search), rank the documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46; a perfect re-ranker can achieve a score of 95.47.
- **TREC-DL19 DE-EN**: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
- **GermanDPR DE-DE**: The [GermanDPR](https://www.deepset.ai/germanquad) dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 Million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.
We also check the performance of bi-encoders using the same evaluation: The retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.
| Model-Name | TREC-DL19 EN-EN | TREC-DL19 DE-EN | GermanDPR DE-DE | Docs / Sec |
| ------------- |:-------------:| :-----: | :---: | :----: |
| BM25 | 45.46 | - | 35.85 | -|
| **Cross-Encoder Re-Rankers** | | | |
| [cross-encoder/msmarco-MiniLM-L6-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L6-en-de-v1) | 72.43 | 65.53 | 46.77 | 1600 |
| [cross-encoder/msmarco-MiniLM-L12-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L12-en-de-v1) | 72.94 | 66.07 | 49.91 | 900 |
| [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) (DE only) | - | - | 53.67 | 260 |
| [deepset/gbert-base-germandpr-reranking](https://huggingface.co/deepset/gbert-base-germandpr-reranking) (DE only) | - | - | 53.59 | 260 |
| **Bi-Encoders (re-ranking)** | | | |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) | 63.38 | 58.28 | 37.88 | 940 |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch) | 65.51 | 58.69 | 38.32 | 940 |
| [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased) (DE only) | - | - | 34.31 | 450 |
| [deepset/gbert-base-germandpr-question_encoder](https://huggingface.co/deepset/gbert-base-germandpr-question_encoder) (DE only) | - | - | 42.55 | 450 |
Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
|
889 | cross-encoder/nli-MiniLM2-L6-H768 | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
license: apache-2.0
tags:
- MiniLMv2
datasets:
- multi_nli
- snli
metrics:
- accuracy
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-MiniLM2-L6-H768')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-MiniLM2-L6-H768')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-MiniLM2-L6-H768')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-MiniLM2-L6-H768')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
890 | cross-encoder/nli-deberta-base | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- deberta-base
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-base')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-base')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-base')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
891 | cross-encoder/nli-deberta-v3-base | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-base
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 92.38
- Accuracy on MNLI mismatched set: 90.04
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-base')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-base')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-base')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
892 | cross-encoder/nli-deberta-v3-large | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-large
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 92.20
- Accuracy on MNLI mismatched set: 90.49
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-large')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-large')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-large')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-large')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
893 | cross-encoder/nli-deberta-v3-small | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-small
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 91.65
- Accuracy on MNLI mismatched set: 87.55
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-small')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-small')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-small')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-small')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
894 | cross-encoder/nli-deberta-v3-xsmall | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-xsmall
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 91.64
- Accuracy on MNLI mismatched set: 87.77
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-xsmall')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-xsmall')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
895 | cross-encoder/nli-distilroberta-base | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- distilroberta-base
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-distilroberta-base')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-distilroberta-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-distilroberta-base')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-distilroberta-base')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
896 | cross-encoder/nli-roberta-base | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- roberta-base
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-roberta-base')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-roberta-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-roberta-base')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-roberta-base')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
897 | cross-encoder/qnli-distilroberta-base | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Question Natural Language Inference (QNLI)
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
Given a question and paragraph, can the question be answered by the paragraph? The models have been trained on the [GLUE QNLI](https://arxiv.org/abs/1804.07461) dataset, which transformed the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/) into an NLI task.
## Performance
For performance results of this model, see [SBERT.net Pre-trained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/qnli-distilroberta-base')
scores = model.predict([('Query1', 'Paragraph1'), ('Query2', 'Paragraph2')])
#e.g.
scores = model.predict([('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'), ('What is the size of New York?', 'New York City is famous for the Metropolitan Museum of Art.')])
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/qnli-distilroberta-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/qnli-distilroberta-base')
features = tokenizer(['How many people live in Berlin?', 'What is the size of New York?'], ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = torch.nn.functional.sigmoid(model(**features).logits)
print(scores)
``` |
898 | cross-encoder/qnli-electra-base | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Question Natural Language Inference (QNLI)
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
Given a question and paragraph, can the question be answered by the paragraph? The models have been trained on the [GLUE QNLI](https://arxiv.org/abs/1804.07461) dataset, which transformed the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/) into an NLI task.
## Performance
For performance results of this model, see [SBERT.net Pre-trained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/qnli-electra-base')
scores = model.predict([('Query1', 'Paragraph1'), ('Query2', 'Paragraph2')])
#e.g.
scores = model.predict([('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'), ('What is the size of New York?', 'New York City is famous for the Metropolitan Museum of Art.')])
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/qnli-electra-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/qnli-electra-base')
features = tokenizer(['How many people live in Berlin?', 'What is the size of New York?'], ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = torch.nn.functional.sigmoid(model(**features).logits)
print(scores)
``` |
899 | cross-encoder/quora-distilroberta-base | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model will predict a score between 0 and 1 indicating how likely the two given questions are to be duplicates.
Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/quora-distilroberta-base')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
```
You can also use this model without sentence_transformers, using only the Transformers ``AutoModelForSequenceClassification`` class; a minimal sketch follows.
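The question pairs below are illustrative, and applying a sigmoid to the single output logit (mirroring the 0-1 duplicate score described above and the post-processing used by the QNLI cards in this collection) is an assumption rather than an official recipe:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/quora-distilroberta-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/quora-distilroberta-base')

features = tokenizer(['How to learn Java', 'Which city is the capital of France?'],
                     ['How do I learn Java?', 'What is the capital of France?'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    # One logit per question pair; sigmoid maps it to a duplicate probability in [0, 1].
    scores = torch.sigmoid(model(**features).logits)
print(scores)
``` |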
900 | cross-encoder/quora-roberta-base | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model will predict a score between 0 and 1 indicating how likely the two given questions are to be duplicates.
Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/quora-roberta-base')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
```
You can also use this model without sentence_transformers, using only the Transformers ``AutoModelForSequenceClassification`` class. |
901 | cross-encoder/quora-roberta-large | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model will predict a score between 0 and 1 indicating how likely the two given questions are to be duplicates.
Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/quora-roberta-large')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
```
You can also use this model without sentence_transformers, using only the Transformers ``AutoModelForSequenceClassification`` class. |
902 | cross-encoder/stsb-TinyBERT-L-4 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Semantic Textual Similarity
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model will predict a score between 0 and 1 for the semantic similarity of the two sentences.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/stsb-TinyBERT-L-4')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.
You can also use this model without sentence_transformers, using only the Transformers ``AutoModelForSequenceClassification`` class; a minimal sketch follows.
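The sentence pairs below are illustrative, and mapping the single output logit through a sigmoid to the 0-1 similarity score described above is an assumption based on how the other cross-encoder cards in this collection post-process their logits:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/stsb-TinyBERT-L-4')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/stsb-TinyBERT-L-4')

features = tokenizer(['A man is eating pizza', 'A man is playing guitar'],
                     ['A man eats something', 'A woman is reading a book'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    # One logit per sentence pair; sigmoid maps it to a similarity score in [0, 1].
    scores = torch.sigmoid(model(**features).logits)
print(scores)
``` |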
903 | cross-encoder/stsb-distilroberta-base | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Semantic Textual Similarity
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model will predict a score between 0 and 1 for the semantic similarity of the two sentences.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/stsb-distilroberta-base')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.
You can also use this model without sentence_transformers, using only the Transformers ``AutoModelForSequenceClassification`` class. |
904 | cross-encoder/stsb-roberta-base | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Semantic Textual Similarity
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model will predict a score between 0 and 1 for the semantic similarity of the two sentences.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/stsb-roberta-base')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.
You can also use this model without sentence_transformers, using only the Transformers ``AutoModelForSequenceClassification`` class. |
905 | cross-encoder/stsb-roberta-large | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for Semantic Textual Similarity
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model will predict a score between 0 and 1 for the semantic similarity of the two sentences.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/stsb-roberta-large')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.
You can also use this model without sentence_transformers, using only the Transformers ``AutoModelForSequenceClassification`` class. |
907 | cscottp27/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9232542847906783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
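For readers who want to reproduce a comparable run, here is a minimal sketch of these hyperparameters expressed with the Transformers `Trainer` API (dataset loading and tokenization are elided; `tokenized_ds` is a placeholder name and everything not listed above is an assumption):
```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Hyperparameters mirror the list above.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=6)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=tokenized_ds["train"],
#                   eval_dataset=tokenized_ds["validation"])
# trainer.train()
```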
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8352 | 1.0 | 250 | 0.3079 | 0.91 | 0.9086 |
| 0.247 | 2.0 | 500 | 0.2175 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
908 | cvcio/mediawatch-el-topics | [
"AFFAIRS",
"AGRICULTURE",
"ARTS_AND_CULTURE",
"BREAKING_NEWS",
"BUSINESS",
"COVID",
"CRIME",
"ECONOMY",
"EDUCATION",
"ELECTIONS",
"ENTERTAINMENT",
"ENVIRONMENT",
"FOOD",
"HEALTH",
"INTERNATIONAL",
"JUSTICE",
"LAW_AND_ORDER",
"MILITARY",
"NON_PAPER",
"OPINION",
"POLITICS",
"... | ---
language: el
license: gpl-3.0
tags:
- roberta
- Greek
- news
- transformers
- text-classification
pipeline_tag: text-classification
model-index:
- name: mediawatch-el-topics
results:
- task:
type: text-classification
name: Multi Label Text Classification
metrics:
- type: roc_auc
value: 98.55
name: ROCAUC
- type: eval_AFFAIRS
value: 98.72
name: AFFAIRS
- type: eval_AGRICULTURE
value: 97.99
name: AGRICULTURE
- type: eval_ARTS_AND_CULTURE
value: 98.38
name: ARTS_AND_CULTURE
- type: eval_BREAKING_NEWS
value: 96.75
name: BREAKING_NEWS
- type: eval_BUSINESS
value: 98.11
name: BUSINESS
- type: eval_COVID
value: 96.2
name: COVID
- type: eval_CRIME
value: 98.85
name: CRIME
- type: eval_ECONOMY
value: 97.65
name: ECONOMY
- type: eval_EDUCATION
value: 98.65
name: EDUCATION
- type: eval_ELECTIONS
value: 99.4
name: ELECTIONS
- type: eval_ENTERTAINMENT
value: 99.25
name: ENTERTAINMENT
- type: eval_ENVIRONMENT
value: 98.47
name: ENVIRONMENT
- type: eval_FOOD
value: 99.34
name: FOOD
- type: eval_HEALTH
value: 97.23
name: HEALTH
- type: eval_INTERNATIONAL
value: 96.24
name: INTERNATIONAL
- type: eval_JUSTICE
value: 98.62
name: JUSTICE
- type: eval_LAW_AND_ORDER
value: 91.77
name: LAW_AND_ORDER
- type: eval_MILITARY
value: 98.38
name: MILITARY
- type: eval_NON_PAPER
value: 95.95
name: NON_PAPER
- type: eval_OPINION
value: 96.24
name: OPINION
- type: eval_POLITICS
value: 97.73
name: POLITICS
- type: eval_REFUGEE
value: 99.49
name: REFUGEE
- type: eval_REGIONAL
value: 95.2
name: REGIONAL
- type: eval_RELIGION
value: 99.22
name: RELIGION
- type: eval_SCIENCE
value: 98.37
name: SCIENCE
- type: eval_SOCIAL_MEDIA
value: 99.1
name: SOCIAL_MEDIA
- type: eval_SOCIETY
value: 94.39
name: SOCIETY
- type: eval_SPORTS
value: 99.39
name: SPORTS
- type: eval_TECH
value: 99.23
name: TECH
- type: eval_TOURISM
value: 99.0
name: TOURISM
- type: eval_TRANSPORT
value: 98.79
name: TRANSPORT
- type: eval_TRAVEL
value: 98.32
name: TRAVEL
- type: eval_WEATHER
value: 99.5
name: WEATHER
widget:
- text: "Παρ’ ολίγον «θερμό» επεισόδιο τουρκικού πολεμικού πλοίου με ελληνικό ωκεανογραφικό στην περιοχή μεταξύ Ρόδου και Καστελόριζου, στο διάστημα 20-23 Σεπτεμβρίου, αποκάλυψε το ΟΡΕΝ. Σύμφωνα με πληροφορίες που μετέδωσε το κεντρικό δελτίο ειδήσεων, όταν το ελληνικό ερευνητικό « ΑΙΓΑΙΟ » που ανήκει στο Ελληνικό Κέντρο Θαλασσίων Ερευνών βγήκε έξω από τα 6 ν.μ, σε διεθνή ύδατα, το προσέγγισε τουρκικό πολεμικό πλοίο, ο κυβερνήτης του οποίου ζήτησε δύο φορές μέσω ασυρμάτου να ενημερωθεί για τα στοιχεία του πλοίου, αλλά και για την αποστολή του. Ο πλοίαρχος του ελληνικού ερευνητικού δεν απάντησε και τελικά το τουρκικό πολεμικό απομακρύνθηκε."
example_title: Topic AFFAIRS
- text: "Η κυβερνητική ανικανότητα οδηγεί την χώρα στο χάος. Η κυβερνηση Μητσοτακη αδυνατεί να διαχειριστεί την πανδημία. Δεν μπορει ούτε να πείσει τον κόσμο να εμβολιαστεί, που ήταν το πιο απλο πράγμα. Σημερα λοιπόν φτάσαμε στο σημείο να μιλάμε για επαναφορά της χρήσης μάσκας σε εξωτερικούς χώρους ακόμη και όπου δεν υπάρχει συγχρωτισμός. Στις συζητήσεις των ειδικών θα βρεθεί επίσης το ενδεχόμενο για τοπικά lockdown σε περιοχές με βαρύ ιικό φορτίο για να μην ξεφύγει η κατάσταση, ενώ θα χρειάζεται κάποιος για τις μετακινήσεις του είτε πιστοποιητικό εμβολιασμού ή νόσησης και οι ανεμβολίαστοι rapid ή μοριακό τεστ."
example_title: Topic COVID
- text: "Η «ωραία Ελένη» επέστρεψε στην τηλεόραση, μέσα από τη συχνότητα του MEGA και άφησε τις καλύτερες εντυπώσεις. Το πλατό από το οποίο εμφανίζεται η Ελένη Μενεγάκη έχει φτιαχτεί από την αρχή για την εκπομπή της. Σήμερα, στο κλείσιμο της εκπομπής η Ελένη πέρασε ανάμεσα από τις κάμερες για να μπει στο καμαρίνι της «Μην τρομοκρατείστε, είμαι η Ελένη Μενεγάκη, τα κάνω αυτά. Με συγχωρείται, έχω ψυχολογικά αν δεν είμαι ελεύθερη» είπε αρχικά η παρουσιάστρια στους συνεργάτες της και πρόσθεσε στη συνέχεια: «Η Ελένη ολοκλήρωσε. Μπορείτε να συνεχίσετε με το υπόλοιπο πρόγραμμα του Mega. Εγώ ανοίγω το καμαρίνι, αν με αφήσουν. Μπαίνω καμαρίνι». Δείτε το απόσπασμα!"
example_title: Topic ENTERTAINMENT
- text: "Ένα εξαιρετικά ενδιαφέρον «κουτσομπολιό» εντόπισαν οι κεραίες της στήλης πέριξ του Μεγάρου Μαξίμου : το κατά πόσον, δηλαδή, ο «εξ απορρήτων» του Κυριάκου Μητσοτάκη , Γιώργος Γεραπετρίτης μετέχει στη διαχείριση της πανδημίας και στην διαδικασία λήψης αποφάσεων. Το εν λόγω «κουτσομπολιό» πυροδότησε το γεγονός ότι σε σαββατιάτικη εφημερίδα δημοσιεύθηκαν προχθές δηλώσεις του υπουργού Επικρατείας με τις οποίες απέκλειε κάθε σενάριο νέων οριζόντιων μέτρων και την ίδια ώρα, το Μαξίμου ανήγγελλε… καραντίνα στη Μύκονο. «Είναι αυτονόητο ότι η κοινωνία και η οικονομία δεν αντέχουν οριζόντιους περιορισμούς», έλεγε χαρακτηριστικά ο Γεραπετρίτης, την ώρα που η κυβέρνηση ανακοίνωνε… αυτούς τους οριζόντιους περιορισμούς. Ως εκ τούτων, δύο τινά μπορεί να συμβαίνουν: είτε ο υπουργός Επικρατείας δεν μετέχει πλέον στη λήψη των αποφάσεων, είτε η απόφαση για οριζόντια μέτρα ελήφθη υπό το κράτος πανικού το πρωί του Σαββάτου, όταν έφτασε στο Μαξίμου η τελευταία «φουρνιά» των επιδημιολογικών δεδομένων για το νησί των ανέμων…"
example_title: Topic NON_PAPER
- text: "Είναι ξεκάθαρο ότι μετά το πλήγμα που δέχθηκε η κυβέρνησή του από τις αδυναμίες στην αντιμετώπιση των καταστροφικών πυρκαγιών το μεγάλο στοίχημα για τον Κυριάκο Μητσοτάκη είναι να προχωρήσει συντεταγμένα και χωρίς παρατράγουδα ο σχεδιασμός για την αποκατάσταση των ζημιών. Ο Πρωθυπουργός έχει ήδη φτιάξει μια ομάδα κρούσης την οποία αποτελούν 9 υπουργοί. Τα μέλη που απαρτίζουν την ομάδα κρούσης και τα οποία βρίσκονται σε συνεχή, καθημερινή επαφή με τον Κυριάκο Μητσοτάκη είναι, όπως μας πληροφορεί η στήλη «Θεωρείο» της «Καθημερινής» είναι οι: Γ. Γεραπετρίτης, Α. Σκέρτσος, Χρ. Τριαντόπουλος, Κ. Καραμανλής, Κ. Σκρέκας, Στ. Πέτσας, Σπ. Λιβανός και φυσικά οι Χρ. Σταικούρας και Θ. Σκυλακάκης."
example_title: Topic OPINION
---
**Disclaimer**: *This model is still under testing and may change in the future, we will try to keep backwards compatibility. For any questions reach us at info@cvcio.org*
# MediaWatch News Topics (Greek)
Fine-tuned model for multi-label text-classification (SequenceClassification), based on [roberta-el-news](https://huggingface.co/cvcio/roberta-el-news), using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. The model classifies news in real time into up to 33 topics, including: *AFFAIRS*, *AGRICULTURE*, *ARTS_AND_CULTURE*, *BREAKING_NEWS*, *BUSINESS*, *COVID*, *ECONOMY*, *EDUCATION*, *ELECTIONS*, *ENTERTAINMENT*, *ENVIRONMENT*, *FOOD*, *HEALTH*, *INTERNATIONAL*, *LAW_AND_ORDER*, *MILITARY*, *NON_PAPER*, *OPINION*, *POLITICS*, *REFUGEE*, *REGIONAL*, *RELIGION*, *SCIENCE*, *SOCIAL_MEDIA*, *SOCIETY*, *SPORTS*, *TECH*, *TOURISM*, *TRANSPORT*, *TRAVEL*, *WEATHER*, *CRIME*, *JUSTICE*.
## How to use
You can use this model directly with a pipeline for text-classification:
```python
from transformers import pipeline
pipe = pipeline(
task="text-classification",
model="cvcio/mediawatch-el-topics",
tokenizer="cvcio/roberta-el-news" # or cvcio/mediawatch-el-topics
)
topics = pipe(
"Η βιασύνη αρκετών χωρών να άρουν τους περιορισμούς κατά του κορονοϊού, "+
"αν όχι να κηρύξουν το τέλος της πανδημίας, με το σκεπτικό ότι έφτασε "+
"πλέον η ώρα να συμβιώσουμε με την Covid-19, έχει κάνει μερικούς πιο "+
"επιφυλακτικούς επιστήμονες να προειδοποιούν ότι πρόκειται μάλλον "+
"για «ενδημική αυταπάτη» και ότι είναι πρόωρη τέτοια υπερβολική "+
"χαλάρωση. Καθώς τα κρούσματα της Covid-19, μετά το αιφνιδιαστικό "+
"μαζικό κύμα της παραλλαγής Όμικρον, εμφανίζουν τάση υποχώρησης σε "+
"Ευρώπη και Βόρεια Αμερική, όπου περισσεύει η κόπωση μεταξύ των "+
"πολιτών μετά από δύο χρόνια πανδημίας, ειδικοί και μη αδημονούν να "+
"«ξεμπερδέψουν» με τον κορονοϊό.",
padding=True,
truncation=True,
max_length=512,
return_all_scores=True
)
print(topics)
# outputs
[
[
{'label': 'AFFAIRS', 'score': 0.0018806682201102376},
{'label': 'AGRICULTURE', 'score': 0.00014653144171461463},
{'label': 'ARTS_AND_CULTURE', 'score': 0.0012948638759553432},
{'label': 'BREAKING_NEWS', 'score': 0.0001729220530251041},
{'label': 'BUSINESS', 'score': 0.0028276608791202307},
{'label': 'COVID', 'score': 0.4407998025417328},
{'label': 'ECONOMY', 'score': 0.039826102554798126},
{'label': 'EDUCATION', 'score': 0.0019098613411188126},
{'label': 'ELECTIONS', 'score': 0.0003333651984576136},
{'label': 'ENTERTAINMENT', 'score': 0.004249618388712406},
{'label': 'ENVIRONMENT', 'score': 0.0015828514005988836},
{'label': 'FOOD', 'score': 0.0018390495097264647},
{'label': 'HEALTH', 'score': 0.1204477995634079},
{'label': 'INTERNATIONAL', 'score': 0.25892165303230286},
{'label': 'LAW_AND_ORDER', 'score': 0.07646272331476212},
{'label': 'MILITARY', 'score': 0.00033025629818439484},
{'label': 'NON_PAPER', 'score': 0.011991199105978012},
{'label': 'OPINION', 'score': 0.16166265308856964},
{'label': 'POLITICS', 'score': 0.0008890336030162871},
{'label': 'REFUGEE', 'score': 0.0011504743015393615},
{'label': 'REGIONAL', 'score': 0.0008734092116355896},
{'label': 'RELIGION', 'score': 0.0009001944563351572},
{'label': 'SCIENCE', 'score': 0.05075162276625633},
{'label': 'SOCIAL_MEDIA', 'score': 0.00039615994319319725},
{'label': 'SOCIETY', 'score': 0.0043518817983567715},
{'label': 'SPORTS', 'score': 0.002416545059531927},
{'label': 'TECH', 'score': 0.0007818648009561002},
{'label': 'TOURISM', 'score': 0.011870541609823704},
{'label': 'TRANSPORT', 'score': 0.0009422845905646682},
{'label': 'TRAVEL', 'score': 0.03004464879631996},
{'label': 'WEATHER', 'score': 0.00040286066359840333},
{'label': 'CRIME', 'score': 0.0005416403291746974},
{'label': 'JUSTICE', 'score': 0.000990519649349153}
]
]
```
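Since this is a multi-label classifier, a common post-processing step is to keep every topic whose score clears a threshold rather than only the top-scoring one. A minimal sketch (the 0.3 threshold is an assumed value, not part of the model card):
```python
def select_topics(pipeline_output, threshold=0.3):
    """Keep every topic whose score clears the threshold (the default value is an assumption)."""
    return [entry["label"] for entry in pipeline_output if entry["score"] >= threshold]

# With the `topics` result of the pipeline call above:
# select_topics(topics[0]) -> ['COVID'], given the sample scores shown
```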
## Labels
All labels, except *NON_PAPER*, were retrieved from the source articles during the data collection step, without any preprocessing, assuming that journalists and newsrooms assign correct tags to their articles. We disregarded all articles with more than 6 tags to reduce bias and tag manipulation.
| label | roc_auc | samples |
|-------:|--------:|--------:|
| AFFAIRS | 0.9872 | 6,314 |
| AGRICULTURE | 0.9799 | 1,254 |
| ARTS_AND_CULTURE | 0.9838 | 15,968 |
| BREAKING_NEWS | 0.9675 | 827 |
| BUSINESS | 0.9811 | 6,507 |
| COVID | 0.9620 | 50,000 |
| CRIME | 0.9885 | 34,421 |
| ECONOMY | 0.9765 | 45,474 |
| EDUCATION | 0.9865 | 10,111 |
| ELECTIONS | 0.9940 | 7,571 |
| ENTERTAINMENT | 0.9925 | 23,323 |
| ENVIRONMENT | 0.9847 | 23,060 |
| FOOD | 0.9934 | 3,712 |
| HEALTH | 0.9723 | 16,852 |
| INTERNATIONAL | 0.9624 | 50,000 |
| JUSTICE | 0.9862 | 4,860 |
| LAW_AND_ORDER | 0.9177 | 50,000 |
| MILITARY | 0.9838 | 6,536 |
| NON_PAPER | 0.9595 | 4,589 |
| OPINION | 0.9624 | 6,296 |
| POLITICS | 0.9773 | 50,000 |
| REFUGEE | 0.9949 | 4,536 |
| REGIONAL | 0.9520 | 50,000 |
| RELIGION | 0.9922 | 11,533 |
| SCIENCE | 0.9837 | 1,998 |
| SOCIAL_MEDIA | 0.991 | 6,212 |
| SOCIETY | 0.9439 | 50,000 |
| SPORTS | 0.9939 | 31,396 |
| TECH | 0.9923 | 8,225 |
| TOURISM | 0.9900 | 8,081 |
| TRANSPORT | 0.9879 | 3,211 |
| TRAVEL | 0.9832 | 4,638 |
| WEATHER | 0.9950 | 19,931 |
| loss | 0.0533 | - |
| roc_auc | 0.9855 | - |
## Training
The model was fine-tuned on an NVIDIA A10 GPU for 15 epochs (approximately 59K steps, about 8 hours of training) with a batch size of 128. The optimizer was Adam with a learning rate of 1e-5 and a weight decay of 0.01. We used roc_auc_micro to evaluate the results.
### Framework versions
- Transformers 4.13.0
- Pytorch 1.9.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
## Authors
Dimitris Papaevagelou - [@andefined](https://github.com/andefined)
## About Us
[Civic Information Office](https://cvcio.org/) is a non-profit organization based in Athens, Greece, focused on creating technology and research products for the public interest. |
909 | d4data/bias-detection-model | [
"Non-biased",
"Biased"
] | ---
language:
- en
tags:
- Text Classification
co2_eq_emissions: 0.319355
widget:
- text: "Nevertheless, Trump and other Republicans have tarred the protests as havens for terrorists intent on destroying property."
example_title: "Biased example 1"
- text: "Billie Eilish issues apology for mouthing an anti-Asian derogatory term in a resurfaced video."
example_title: "Biased example 2"
- text: "Christians should make clear that the perpetuation of objectionable vaccines and the lack of alternatives is a kind of coercion."
example_title: "Biased example 3"
- text: "There have been a protest by a group of people"
example_title: "Non-Biased example 1"
- text: "While emphasizing he’s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology."
example_title: "Non-Biased example 2"
---
## About the Model
An English sequence classification model, trained on the MBAD dataset to detect bias and fairness in sentences (news articles). This model was built on top of the distilbert-base-uncased model and trained for 30 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512.
- Dataset: MBAD Data
- Carbon emission: 0.319355 kg
| Train Accuracy | Validation Accuracy | Train loss | Test loss |
|---------------:| -------------------:| ----------:|----------:|
| 76.97 | 62.00 | 0.45 | 0.96 |
## Usage
The easiest way is to call the hosted Inference API from Hugging Face; the second method is the pipeline object offered by the Transformers library. Both are illustrated below.
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("d4data/bias-detection-model")
model = TFAutoModelForSequenceClassification.from_pretrained("d4data/bias-detection-model")
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer) # cuda = 0,1 based on gpu availability
classifier("The irony, of course, is that the exhibit that invites people to throw trash at vacuuming Ivanka Trump lookalike reflects every stereotype feminists claim to stand against, oversexualizing Ivanka’s body and ignoring her hard work.")
```
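For the first option, a minimal sketch of calling the hosted Inference API over plain HTTP (the bearer token is a placeholder; check the current Hugging Face documentation for the exact endpoint and response format):
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/d4data/bias-detection-model"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # placeholder token

def query(text):
    # Send the sentence to the hosted model and return the raw JSON label scores.
    response = requests.post(API_URL, headers=headers, json={"inputs": text})
    return response.json()

print(query("There have been a protest by a group of people"))
```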
## Author
This model is part of the research topic "Bias and Fairness in AI" conducted by Deepak John Reji and Shaina Raza. If you use this work (code, model or dataset), please star the repository at:
> Bias & Fairness in AI, (2022), GitHub repository, <https://github.com/dreji18/Fairness-in-AI>
|
910 | d4data/environmental-due-diligence-model | [
"Contaminants",
"Remediation Standards",
"Geology",
"Contaminated media",
"Remediation Goals",
"Source of contamination",
"Extent of Contamination",
"Extent of contamination",
"Groundwater-Surfacewater interaction",
"Remediation Activities",
"Depth to Water",
"GW Velocity"
] | ---
language:
- en
tags:
- Text Classification
co2_eq_emissions: 0.1069
widget:
- text: "At the every month post-injection monitoring event, TCE, carbon tetrachloride, and chloroform concentrations were above CBSGs in three of the wells"
example_title: "Remediation Standards"
- text: "TRPH exceedances were observed in the subsurface soils immediately above the water table and there are no TRPH exceedances in surface soils."
example_title: "Extent of Contamination"
- text: "weathered shale was encountered below the surface area with fluvial deposits. Sediments in the coastal plain region are found above and below the bedrock with sandstones and shales that form the basement rock"
example_title: "Geology"
---
## About the Model
An environmental due diligence classification model, trained on a customized environmental dataset to detect contamination and remediation activities (both prevailing and planned) as part of the site assessment process. The model can identify the source of contamination, the extent of contamination, the types of contaminants present at the site, and the flow of contaminants and their interaction with groundwater, surface water, and other surrounding water bodies.
This model was built on top of the distilbert-base-uncased model and trained for 10 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512.
- Dataset: Open Source News data + Custom data
- Carbon emission: 0.1069 kg
## Usage
The easiest way is to load the model through the pipeline object offered by the Transformers library.
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("d4data/environmental-due-diligence-model")
model = TFAutoModelForSequenceClassification.from_pretrained("d4data/environmental-due-diligence-model")
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer) # cuda = 0,1 based on gpu availability
classifier("At the every month post-injection monitoring event, TCE, carbon tetrachloride, and chloroform concentrations were above CBSGs in three of the wells")
```
## Author
This model is part of the research topic "Environmental Due Diligence" conducted by Deepak John Reji and Afreen Aman. If you use this work (code, model or dataset), please cite as:
> Environmental Due Diligence, (2020), https://www.sciencedirect.com/science/article/pii/S2665963822001117
|
911 | d4niel92/xlm-roberta-base-finetuned-marc-en | [
"good",
"great",
"ok",
"poor",
"terrible"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8976
- Mae: 0.4268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
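Since the card reports MAE on the evaluation set, here is a minimal sketch of a `compute_metrics` function that a `Trainer` run with the hyperparameters above could use to produce that number (the function and variable names are illustrative):
```python
import numpy as np

def compute_metrics(eval_pred):
    """Mean absolute error between predicted and true star-rating classes."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"mae": float(np.mean(np.abs(predictions - labels)))}
```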
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.092 | 1.0 | 235 | 0.9514 | 0.5122 |
| 0.9509 | 2.0 | 470 | 0.8976 | 0.4268 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
912 | daekeun-ml/koelectra-small-v3-korsts | [
"LABEL_0"
] | ---
language:
- ko
pipeline_tag: sentence-similarity
tags:
- sentence-similarity
- transformers
license: cc-by-4.0
datasets:
- korsts
metrics:
- accuracy
- f1
- precision
- recall
---
# Similarity between two sentences (fine-tuning with KoELECTRA-Small-v3 model and KorSTS dataset)
## Usage (Amazon SageMaker inference applicable)
It uses the interface of the SageMaker Inference Toolkit as-is, so it can easily be deployed to a SageMaker endpoint.
### inference_korsts.py
```python
import json
import sys
import logging
import torch
from torch import nn
from transformers import ElectraConfig
from transformers import ElectraModel, AutoTokenizer, ElectraTokenizer, ElectraForSequenceClassification
logging.basicConfig(
level=logging.INFO,
format='[{%(filename)s:%(lineno)d} %(levelname)s - %(message)s',
handlers=[
logging.FileHandler(filename='tmp.log'),
logging.StreamHandler(sys.stdout)
]
)
logger = logging.getLogger(__name__)
max_seq_length = 128
tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/koelectra-small-v3-korsts")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator'
def model_fn(model_path=None):
####
# If you have your own trained model
# Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator'
####
#config = ElectraConfig.from_json_file(f'{model_path}/config.json')
#model = ElectraForSequenceClassification.from_pretrained(f'{model_path}/model.pth', config=config)
model = ElectraForSequenceClassification.from_pretrained('daekeun-ml/koelectra-small-v3-korsts')
model.to(device)
return model
def input_fn(input_data, content_type="application/jsonlines"):
data_str = input_data.decode("utf-8")
jsonlines = data_str.split("\n")
transformed_inputs = []
for jsonline in jsonlines:
text = json.loads(jsonline)["text"]
logger.info("input text: {}".format(text))
encode_plus_token = tokenizer.encode_plus(
text,
max_length=max_seq_length,
add_special_tokens=True,
return_token_type_ids=False,
padding="max_length",
return_attention_mask=True,
return_tensors="pt",
truncation=True,
)
transformed_inputs.append(encode_plus_token)
return transformed_inputs
def predict_fn(transformed_inputs, model):
predicted_classes = []
for data in transformed_inputs:
data = data.to(device)
output = model(**data)
prediction_dict = {}
prediction_dict['score'] = output[0].squeeze().cpu().detach().numpy().tolist()
jsonline = json.dumps(prediction_dict)
logger.info("jsonline: {}".format(jsonline))
predicted_classes.append(jsonline)
predicted_classes_jsonlines = "\n".join(predicted_classes)
return predicted_classes_jsonlines
def output_fn(outputs, accept="application/jsonlines"):
return outputs, accept
```
### test.py
```python
>>> from inference_korsts import model_fn, input_fn, predict_fn, output_fn
>>> with open('./samples/korsts.txt', mode='rb') as file:
>>> model_input_data = file.read()
>>> model = model_fn()
>>> transformed_inputs = input_fn(model_input_data)
>>> predicted_classes_jsonlines = predict_fn(transformed_inputs, model)
>>> model_outputs = output_fn(predicted_classes_jsonlines)
>>> print(model_outputs[0])
[{inference_korsts.py:44} INFO - input text: ['맛있는 라면을 먹고 싶어요', '후루룩 쩝쩝 후루룩 쩝쩝 맛좋은 라면']
[{inference_korsts.py:44} INFO - input text: ['뽀로로는 내친구', '머신러닝은 러닝머신이 아닙니다.']
[{inference_korsts.py:71} INFO - jsonline: {"score": 4.786738872528076}
[{inference_korsts.py:71} INFO - jsonline: {"score": 0.2319069355726242}
{"score": 4.786738872528076}
{"score": 0.2319069355726242}
```
### Sample data (samples/korsts.txt)
```
{"text": ["맛있는 라면을 먹고 싶어요", "후루룩 쩝쩝 후루룩 쩝쩝 맛좋은 라면"]}
{"text": ["뽀로로는 내친구", "머신러닝은 러닝머신이 아닙니다."]}
```
## References
- KoELECTRA: https://github.com/monologg/KoELECTRA
- KorNLI and KorSTS Dataset: https://github.com/kakaobrain/KorNLUDatasets |
913 | daekeun-ml/koelectra-small-v3-nsmc | [
"0",
"1"
] | ---
language:
- ko
tags:
- classification
license: mit
datasets:
- nsmc
widget:
- text: "불후의 명작입니다! 이렇게 감동적인 내용은 처음이에요"
example_title: "Positive"
- text: "시간이 정말 아깝습니다. 10점 만점에 1점도 아까워요.."
example_title: "Negative"
metrics:
- accuracy
- f1
- precision
- recall
---
# Sentiment Binary Classification (fine-tuning with KoELECTRA-Small-v3 model and Naver Sentiment Movie Corpus dataset)
## Usage (Amazon SageMaker inference applicable)
It uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint.
### inference_nsmc.py
```python
import json
import sys
import logging
import torch
from torch import nn
from transformers import ElectraConfig
from transformers import ElectraModel, AutoTokenizer, ElectraTokenizer, ElectraForSequenceClassification
logging.basicConfig(
level=logging.INFO,
format='[{%(filename)s:%(lineno)d} %(levelname)s - %(message)s',
handlers=[
logging.FileHandler(filename='tmp.log'),
logging.StreamHandler(sys.stdout)
]
)
logger = logging.getLogger(__name__)
max_seq_length = 128
classes = ['Neg', 'Pos']
tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/koelectra-small-v3-nsmc")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def model_fn(model_path=None):
####
# If you have your own trained model
# Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator'
####
#config = ElectraConfig.from_json_file(f'{model_path}/config.json')
#model = ElectraForSequenceClassification.from_pretrained(f'{model_path}/model.pth', config=config)
# Download model from the Huggingface hub
model = ElectraForSequenceClassification.from_pretrained('daekeun-ml/koelectra-small-v3-nsmc')
model.to(device)
return model
def input_fn(input_data, content_type="application/jsonlines"):
data_str = input_data.decode("utf-8")
jsonlines = data_str.split("\n")
transformed_inputs = []
for jsonline in jsonlines:
text = json.loads(jsonline)["text"][0]
logger.info("input text: {}".format(text))
encode_plus_token = tokenizer.encode_plus(
text,
max_length=max_seq_length,
add_special_tokens=True,
return_token_type_ids=False,
padding="max_length",
return_attention_mask=True,
return_tensors="pt",
truncation=True,
)
transformed_inputs.append(encode_plus_token)
return transformed_inputs
def predict_fn(transformed_inputs, model):
predicted_classes = []
for data in transformed_inputs:
data = data.to(device)
output = model(**data)
softmax_fn = nn.Softmax(dim=1)
softmax_output = softmax_fn(output[0])
_, prediction = torch.max(softmax_output, dim=1)
predicted_class_idx = prediction.item()
predicted_class = classes[predicted_class_idx]
score = softmax_output[0][predicted_class_idx]
logger.info("predicted_class: {}".format(predicted_class))
prediction_dict = {}
prediction_dict["predicted_label"] = predicted_class
prediction_dict['score'] = score.cpu().detach().numpy().tolist()
jsonline = json.dumps(prediction_dict)
logger.info("jsonline: {}".format(jsonline))
predicted_classes.append(jsonline)
predicted_classes_jsonlines = "\n".join(predicted_classes)
return predicted_classes_jsonlines
def output_fn(outputs, accept="application/jsonlines"):
return outputs, accept
```
### test.py
```python
>>> from inference_nsmc import model_fn, input_fn, predict_fn, output_fn
>>> with open('samples/nsmc.txt', mode='rb') as file:
>>> model_input_data = file.read()
>>> model = model_fn()
>>> transformed_inputs = input_fn(model_input_data)
>>> predicted_classes_jsonlines = predict_fn(transformed_inputs, model)
>>> model_outputs = output_fn(predicted_classes_jsonlines)
>>> print(model_outputs[0])
[{inference_nsmc.py:47} INFO - input text: 이 영화는 최고의 영화입니다
[{inference_nsmc.py:47} INFO - input text: 최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다
[{inference_nsmc.py:77} INFO - predicted_class: Pos
[{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Pos", "score": 0.9619030952453613}
[{inference_nsmc.py:77} INFO - predicted_class: Neg
[{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Neg", "score": 0.9994170665740967}
{"predicted_label": "Pos", "score": 0.9619030952453613}
{"predicted_label": "Neg", "score": 0.9994170665740967}
```
### Sample data (samples/nsmc.txt)
```
{"text": ["이 영화는 최고의 영화입니다"]}
{"text": ["최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다"]}
```
## References
- KoELECTRA: https://github.com/monologg/KoELECTRA
- Naver Sentiment Movie Corpus Dataset: https://github.com/e9t/nsmc |
914 | damlab/HIV_PR_resist | [
"FPV",
"IDV",
"NFV",
"SQV"
] |
---
license: mit
---
# HIV_PR_resist model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Protease-Resistance model was trained as a refinement of the [HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) and serves to better predict whether an HIV protease sequence will be resistant to certain protease inhibitors. HIV-BERT is a model refined from the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV protease sequences from the [Stanford HIV Genotype-Phenotype Database](https://hivdb.stanford.edu/pages/genotype-phenotype.html), allowing even more precise prediction of protease inhibitor resistance than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Protease-Resistance model is intended to predict the likelihood that an HIV protease sequence will be resistant to protease inhibitors. The protease gene is responsible for cleaving viral proteins into their active states, and as such is an ideal target for antiretroviral therapy. Annotation programs designed to predict and identify protease resistance using known mutations already exist, however with varied results. The HIV-BERT-Protease-Resistance model is designed to provide an alternative, NLP-based mechanism for predicting resistance mutations when provided with an HIV protease sequence.
## Intended Uses & Limitations
This tool can be used as a predictor of protease resistance mutations within an HIV genomic sequence. It should not be considered a clinical diagnostic tool.
## How to use
*Prediction example of protease sequences*
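As a minimal sketch (not provided by the model authors), the classifier can be called through the `transformers` pipeline, assuming the same spaced amino-acid input format used by the related damlab models; the protease fragment and the `return_all_scores` flag below are illustrative only:
```python
from transformers import pipeline

# Minimal sketch: score a (truncated, illustrative) protease sequence for drug resistance.
predictor = pipeline("text-classification", model="damlab/HIV_PR_resist", return_all_scores=True)

# Amino acids separated by spaces, matching the ProtBert-style preprocessing described below.
protease = "P Q I T L W Q R P L V T I K I G G Q L K E A L L D T G A D D T V L E"
print(predictor(protease))
# Expected: one score per drug label (FPV, IDV, NFV, SQV).
```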
## Training Data
This model was trained using the [damlab/HIV-PI dataset](https://huggingface.co/datasets/damlab/HIV_PI) using the 0th fold. The dataset consists of 1959 sequences (approximately 99 tokens each) extracted from the Stanford HIV Genotype-Phenotype Database.
## Training Procedure
### Preprocessing
As with the [rostlab/Prot-bert-bfd model](https://huggingface.co/Rostlab/prot_bert_bfd), the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
The [damlab/HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can be resistant to multiple drugs) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed] |
915 | damlab/HIV_V3_Coreceptor | [
"CCR5",
"CXCR4"
] | ---
license: mit
widget:
- text: 'C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C'
- text: 'C T R P N N N T R K S I H I G P G R A F Y T T G Q I I G D I R Q A Y C'
- text: 'C T R P N N N T R R S I R I G P G Q A F Y A T G D I I G D I R Q A H C'
- text: 'C G R P N N H R I K G L R I G P G R A F F A M G A I G G G E I R Q A H C'
---
# HIV_V3_coreceptor model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Coreceptor model was trained as a refinement of the [HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) and serves to better predict HIV V3 coreceptor tropism. HIV-BERT is a model refined from the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the [Los Alamos HIV Sequence Database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html), allowing even more precise prediction of V3 coreceptor tropism than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Coreceptor model is intended to predict the co-receptor tropism of HIV from a segment of the envelope protein. These envelope proteins encapsulate the virus and interact with the host cell through the human CD4 receptor. HIV then requires the interaction of one of two co-receptors: CCR5 or CXCR4. The availability of these co-receptors on different cell types allows the virus to invade different areas of the body and evade antiretroviral therapy. The 3rd variable loop of the envelope protein, the V3 loop, is responsible for this interaction. Given a V3 loop sequence, the HIV-BERT-Coreceptor model will predict the likelihood of binding to each of these co-receptors.
## Intended Uses & Limitations
This tool can be used as a predictor of HIV tropism from the Env-V3 loop. It can natively recognize R5, X4, and dual-tropic viruses. It should not be considered a clinical diagnostic tool.
This tool was trained using the [Los Alamos HIV sequence dataset](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D. Currently, there was no effort made to balance the performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.
## How to use
*Need to add*
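Until the authors add an official example, here is a minimal sketch using the `transformers` pipeline with one of the widget sequences from this card; the `return_all_scores` flag and the output interpretation are assumptions:
```python
from transformers import pipeline

# Minimal sketch: score a V3 loop sequence for co-receptor tropism.
predictor = pipeline("text-classification", model="damlab/HIV_V3_Coreceptor", return_all_scores=True)

v3_loop = "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
print(predictor(v3_loop))
# Expected: one score per co-receptor label (CCR5, CXCR4); a sequence may bind neither, one, or both.
```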
## Training Data
This model was trained using the [damlab/HIV_V3_coreceptor dataset](https://huggingface.co/datasets/damlab/HIV_V3_coreceptor) using the 0th fold. The dataset consists of 2935 V3 sequences (approximately 35 tokens each) extracted from the [Los Alamos HIV Sequence database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html).
## Training Procedure
### Preprocessing
As with the [rostlab/Prot-bert-bfd model](https://huggingface.co/Rostlab/prot_bert_bfd), the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
The [damlab/HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can bind to CCR5, CXCR4, neither, or both) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed]
|
916 | damlab/HIV_V3_bodysite | [
"CNS",
"breast-milk",
"female-genitals",
"gastric",
"lung",
"male-genitals",
"organ",
"periphery-monocyte",
"periphery-tcell"
] | ---
license: mit
widget:
- text: "T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
example_title: "V3 Macrophage"
- text: 'C T R P N N N T R K S I H I G P G R A F Y T T G Q I I G D I R Q A Y C'
example_title: "V3 T-cell"
datasets:
- damlab/HIV_V3_bodysite
metrics:
- accuracy
---
# Model Card for HIV_V3_bodysite
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Bodysite-Identification model was trained as a refinement of the HIV-BERT model (https://huggingface.co/damlab/HIV_BERT) and serves to better predict the location that an HIV V3 loop sample was derived from. HIV-BERT is a model refined from the ProtBert-BFD model (https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database (https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html), allowing even more precise prediction of body site location than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Bodysite-Identification model is intended to predict the location as to where an HIV sequence was most likely derived from. Because HIV infects immune cells, it uses these as a means of rapidly spreading throughout the body. Thus, body site identification can help determine where exactly these HIV particles ultimately end up. This would be helpful when attempting to study HIV treatment strategies. When provided with an HIV genomic sequence, the HIV-BERT-Bodysite-Identification model can predict which tissue it was derived from.
## Intended Uses & Limitations
This tool can be used as a predictor of which body site an HIV sample was derived from based on its genomic sequence. It should not be considered a clinical diagnostic tool.
This tool was trained using the Los Alamos HIV sequence dataset (https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D. Currently, there was no effort made to balance the performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.
## How to use
This model is able to predict the likely body site from a V3 sequence.
This may be used for surveillance of cells that are emerging from latent reservoirs.
Remember that a sequence can come from multiple sites; they are not mutually exclusive.
```python
from transformers import pipeline
predictor = pipeline("text-classification", model="damlab/HIV_V3_bodysite")
predictor(f"C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C")
[
[
{
"label": "periphery-tcell",
"score": 0.29097115993499756
},
{
"label": "periphery-monocyte",
"score": 0.014322502538561821
},
{
"label": "CNS",
"score": 0.06870711594820023
},
{
"label": "breast-milk",
"score": 0.002785981632769108
},
{
"label": "female-genitals",
"score": 0.024997007101774216
},
{
"label": "male-genitals",
"score": 0.01040483545511961
},
{
"label": "gastric",
"score": 0.06872137635946274
},
{
"label": "lung",
"score": 0.04432062804698944
},
{
"label": "organ",
"score": 0.47476938366889954
}
]
]
```
## Training Data
This model was trained using the damlab/HIV_V3_bodysite dataset using the 0th fold. The dataset consists of 5510 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database.
## Training Procedure
### Preprocessing
As with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
The damlab/HIV-BERT model (https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can be found in multiple sites) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.
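For illustration only, a minimal sketch of the class-weighted BCE described above (not the authors' training code); the class frequencies below are placeholders, not real statistics from the dataset:
```python
import torch
from torch import nn

# Hypothetical positive-class frequencies for the 9 body-site labels (placeholders).
class_freq = torch.tensor([0.30, 0.02, 0.08, 0.01, 0.05, 0.02, 0.07, 0.05, 0.40])

# Weight each label's BCE term by the inverse of its class ratio to counter the imbalance.
criterion = nn.BCEWithLogitsLoss(pos_weight=1.0 / class_freq)

logits = torch.randn(4, 9)                      # batch of 4 sequences, 9 body-site labels
targets = torch.randint(0, 2, (4, 9)).float()   # multi-label ground truth
loss = criterion(logits, targets)
print(loss)
```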
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed]
|
919 | danwilbury/xlm-roberta-base-finetuned-marc-en | [
"good",
"great",
"ok",
"poor",
"terrible"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9302
- Mae: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1253 | 1.0 | 235 | 0.9756 | 0.5488 |
| 0.9465 | 2.0 | 470 | 0.9302 | 0.5 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
921 | daveccampbell/xlm-roberta-base-finetuned-marc-en | [
"good",
"great",
"ok",
"poor",
"terrible"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9199
- Mae: 0.4756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1705 | 1.0 | 235 | 0.9985 | 0.5854 |
| 0.9721 | 2.0 | 470 | 0.9199 | 0.4756 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
922 | daveni/twitter-xlm-roberta-emotion-es | [
"anger",
"disgust",
"fear",
"joy",
"others",
"sadness",
"surprise"
] | ---
language:
- es
tags:
- Emotion Analysis
---
**Note**: This model & model card are based on the [finetuned XLM-T for Sentiment Analysis](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)
# twitter-XLM-roBERTa-base for Emotion Analysis
This is an XLM-roBERTa-base model trained on ~198M tweets and finetuned for emotion analysis in Spanish. This model was presented at the EmoEvalEs competition, part of the [IberLEF 2021 Conference](https://sites.google.com/view/iberlef2021/), where the proposed task was the classification of Spanish tweets between seven different classes: *anger*, *disgust*, *fear*, *joy*, *sadness*, *surprise*, and *other*. We achieved the first position in the competition with a macro-averaged F1 score of 71.70%.
- [Our code for EmoEvalEs submission](https://github.com/gsi-upm/emoevales-iberlef2021).
- [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs)
## Example Pipeline with a [Tweet from @JaSantaolalla](https://twitter.com/JaSantaolalla/status/1398383243645177860)
```python
from transformers import pipeline
model_path = "daveni/twitter-xlm-roberta-emotion-es"
emotion_analysis = pipeline("text-classification", framework="pt", model=model_path, tokenizer=model_path)
emotion_analysis("Einstein dijo: Solo hay dos cosas infinitas, el universo y los pinches anuncios de bitcoin en Twitter. Paren ya carajo aaaaaaghhgggghhh me quiero murir")
```
```
[{'label': 'anger', 'score': 0.48307016491889954}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
model_path = "daveni/twitter-xlm-roberta-emotion-es"
tokenizer = AutoTokenizer.from_pretrained(model_path )
config = AutoConfig.from_pretrained(model_path )
# PT
model = AutoModelForSequenceClassification.from_pretrained(model_path )
text = "Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal."
text = preprocess(text)
print(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal.
1) joy 0.7887
2) others 0.1679
3) surprise 0.0152
4) sadness 0.0145
5) anger 0.0077
6) disgust 0.0033
7) fear 0.0027
```
#### Limitations and bias
- The dataset we used for finetuning was unbalanced, where almost half of the records belonged to the *other* class so there might be bias towards this class.
## Training data
Pretrained weights were left identical to the original model released by [cardiffnlp](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base). We used the [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs) for finetuning.
### BibTeX entry and citation info
```bibtex
@inproceedings{vera2021gsi,
title={GSI-UPM at IberLEF2021: Emotion Analysis of Spanish Tweets by Fine-tuning the XLM-RoBERTa Language Model},
author={Vera, D and Araque, O and Iglesias, CA},
booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2021). CEUR Workshop Proceedings, CEUR-WS, M{\'a}laga, Spain},
year={2021}
}
``` |
923 | debjyoti007/new_doc_classifier | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | This model has been trained to classify text from different domains. It is currently trained on a relatively small amount of data and identifies text from 3 domains: "sports", "healthcare" and "financial". Label_0 represents "financial", Label_1 represents "Healthcare" and Label_2 represents "Sports". I plan to train it on more domains and more data soon, which should further improve its accuracy. |
924 | dee4hf/autonlp-shajBERT-38639804 | [
"communal_attack",
"hate_speech",
"inciteful",
"personal_attack",
"political_comment",
"religious",
"religious hatred",
"religious_hatred",
"suicidal_attack"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- dee4hf/autonlp-data-shajBERT
co2_eq_emissions: 11.98841452241473
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 38639804
- CO2 Emissions (in grams): 11.98841452241473
## Validation Metrics
- Loss: 0.421400249004364
- Accuracy: 0.86783988957902
- Macro F1: 0.8669477050676501
- Micro F1: 0.86783988957902
- Weighted F1: 0.86694770506765
- Macro Precision: 0.867606300132228
- Micro Precision: 0.86783988957902
- Weighted Precision: 0.8676063001322278
- Macro Recall: 0.86783988957902
- Micro Recall: 0.86783988957902
- Weighted Recall: 0.86783988957902
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/dee4hf/autonlp-shajBERT-38639804
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
925 | deepset/bert-base-german-cased-hatespeech-GermEval18Coarse | [
"OFFENSE",
"OTHER"
] | ---
license: cc-by-4.0
---
This is a German BERT v1 (https://deepset.ai/german-bert) trained to do hate speech detection on the GermEval18Coarse dataset |
926 | deepset/gbert-base-germandpr-reranking | [
"0",
"1"
] | ---
language: de
datasets:
- deepset/germandpr
license: mit
---
## Overview
**Language model:** gbert-base-germandpr-reranking
**Language:** German
**Training data:** GermanDPR train set (~ 56MB)
**Eval data:** GermanDPR test set (~ 6MB)
**Infrastructure**: 1x V100 GPU
**Published**: June 3rd, 2021
## Details
- We trained a text pair classification model in FARM, which can be used for reranking in document retrieval tasks. To this end, the classifier calculates the similarity of the query and each retrieved top k document (e.g., k=10). The top k documents are then sorted by their similarity scores. The document most similar to the query is the best.
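For readers who want to see the idea outside of haystack, here is a minimal sketch with plain `transformers`; the example query/passages and the assumption that the second logit corresponds to the "relevant" class are illustrative only. See the Usage section below for the supported haystack integration.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "deepset/gbert-base-germandpr-reranking"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

query = "Wie viele Einwohner hat Berlin?"
passages = [
    "Berlin hat rund 3,7 Millionen Einwohner.",
    "Der Rhein ist ein Fluss in Westeuropa.",
]

# Score each (query, passage) pair; treating index 1 as the "relevant" class is an assumption.
inputs = tokenizer([query] * len(passages), passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = torch.softmax(model(**inputs).logits, dim=-1)[:, 1]

# Sort passages by similarity score, best first.
reranked = sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True)
print(reranked)
```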
## Hyperparameters
```
batch_size = 16
n_epochs = 2
max_seq_len = 512 tokens for question and passage concatenated
learning_rate = 2e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
```
## Performance
We use the GermanDPR test dataset as ground truth labels and run two experiments to compare how a BM25 retriever performs with or without reranking with our model. The first experiment runs retrieval on the full German Wikipedia (more than 2 million passages) and the second experiment runs retrieval on the GermanDPR dataset only (not more than 5000 passages). Both experiments use 1025 queries. Note that the second experiment is evaluating on a much simpler task because of the smaller dataset size, which explains the strong BM25 retrieval performance.
### Full German Wikipedia (more than 2 million passages):
BM25 Retriever without Reranking
- recall@3: 0.4088 (419 / 1025)
- mean_reciprocal_rank@3: 0.3322
BM25 Retriever with Reranking Top 10 Documents
- recall@3: 0.5200 (533 / 1025)
- mean_reciprocal_rank@3: 0.4800
### GermanDPR Test Dataset only (not more than 5000 passages):
BM25 Retriever without Reranking
- recall@3: 0.9102 (933 / 1025)
- mean_reciprocal_rank@3: 0.8528
BM25 Retriever with Reranking Top 10 Documents
- recall@3: 0.9298 (953 / 1025)
- mean_reciprocal_rank@3: 0.8813
## Usage
### In haystack
You can load the model in [haystack](https://github.com/deepset-ai/haystack/) for reranking the documents returned by a Retriever:
```python
...
retriever = ElasticsearchRetriever(document_store=document_store)
ranker = FARMRanker(model_name_or_path="deepset/gbert-base-germandpr-reranking")
...
p = Pipeline()
p.add_node(component=retriever, name="ESRetriever", inputs=["Query"])
p.add_node(component=ranker, name="Ranker", inputs=["ESRetriever"])
```
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
927 | deepset/gbert-large-sts | [
"LABEL_0"
] | ---
language: de
license: mit
tags:
- exbert
---
## Overview
**Language model:** gbert-large-sts
**Language:** German
**Training data:** German STS benchmark train and dev set
**Eval data:** German STS benchmark test set
**Infrastructure**: 1x V100 GPU
**Published**: August 12th, 2021
## Details
- We trained a gbert-large model on the task of estimating semantic similarity of German-language text pairs. The dataset is a machine-translated version of the [STS benchmark](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark), which is available [here](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark).
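No usage snippet is provided in this card; as a minimal sketch, the model can be loaded as a text-pair classifier whose single output is read as the similarity estimate. The example sentences and the regression interpretation of the one-dimensional head are assumptions.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "deepset/gbert-large-sts"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Score a German sentence pair; the single logit is interpreted as the STS-style similarity.
inputs = tokenizer("Der Hund spielt im Garten.", "Ein Hund tobt draußen im Garten.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```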
## Hyperparameters
```
batch_size = 16
n_epochs = 4
warmup_ratio = 0.1
learning_rate = 2e-5
lr_schedule = LinearWarmup
```
## Performance
Stay tuned... and watch out for new papers on arxiv.org ;)
## Authors
- Julian Risch: `julian.risch [at] deepset.ai`
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Gutsch: `julian.gutsch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
928 | baikal-nlp/dbert-sentiment | [
"0",
"1"
] | ```
from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline
model = BertForSequenceClassification.from_pretrained("deeq/dbert-sentiment")
tokenizer = BertTokenizer.from_pretrained("deeq/dbert")
nlp = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(nlp("좋아요"))
print(nlp("글쎄요"))
```
|
929 | dennlinger/bert-wiki-paragraphs | [
"0",
"1"
] | ---
language:
- en
tags:
- sentence-similarity
- text-classification
datasets:
- dennlinger/wiki-paragraphs
metrics:
- f1
license: mit
---
# BERT-Wiki-Paragraphs
Authors: Satya Almasian\*, Dennis Aumiller\*, Lucienne-Sophie Marmé, Michael Gertz
Contact us at `<lastname>@informatik.uni-heidelberg.de`
Details for the training method can be found in our work [Structural Text Segmentation of Legal Documents](https://arxiv.org/abs/2012.03619).
The training procedure follows the same setup, but we substitute legal documents for Wikipedia in this model.
Find the associated training data here: [wiki-paragraphs](https://huggingface.co/datasets/dennlinger/wiki-paragraphs)
Training is performed in a form of weakly-supervised fashion to determine whether paragraphs topically belong together or not.
We utilize automatically generated samples from Wikipedia for training, where paragraphs from within the same section are assumed to be topically coherent.
We use the same articles as ([Koshorek et al., 2018](https://arxiv.org/abs/1803.09337)),
albeit from a 2021 dump of Wikipedia, and split at paragraph boundaries instead of the sentence level.
## Usage
Preferred usage is through `transformers.pipeline`:
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="dennlinger/bert-wiki-paragraphs")
pipe("{First paragraph} [SEP] {Second paragraph}")
```
A predicted "1" means that paragraphs belong to the same topic, a "0" indicates a disconnect.
## Training Setup
The model was trained for 3 epochs from `bert-base-uncased` on paragraph pairs (limited to 512 subwords with the `longest_first` truncation strategy).
We use a batch size of 24 with 2 iterations of gradient accumulation (effective batch size of 48), and a learning rate of 1e-4, with gradient clipping at 5.
Training was performed on a single Titan RTX GPU over the duration of 3 weeks.
|
930 | dennlinger/roberta-cls-consec | [
"LABEL_0",
"LABEL_1"
] | # About this model: Topical Change Detection in Documents
This network has been fine-tuned for the task described in the paper *Topical Change Detection in Documents via Embeddings of Long Sequences* and is our best-performing base-transformer model. You can find more detailed information in our GitHub page for the paper [here](https://github.com/dennlinger/TopicalChange), or read the [paper itself](https://arxiv.org/abs/2012.03619). The weights are based on RoBERTa-base.
# Load the model
The preferred way is through pipelines
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="dennlinger/roberta-cls-consec")
pipe("{First paragraph} [SEP] {Second paragraph}")
```
# Input Format
The model expects two segments that are separated with the `[SEP]` token. In our training setup, we had entire paragraphs as samples (or up to 512 tokens across two paragraphs), specifically trained on a Terms of Service data set. Note that this might lead to poor performance on "general" topics, such as news articles or Wikipedia.
# Training objective
The training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the "coherence" of two segments.
If you are experimenting via the Huggingface Model API, the following are interpretations of the `LABEL`s:
* `LABEL_0`: Two input segments separated by `[SEP]` do *not* belong to the same topic.
* `LABEL_1`: Two input segments separated by `[SEP]` do belong to the same topic.
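Putting this together, a minimal segmentation sketch (illustrative, not from the original repository); the paragraph texts are placeholders:
```python
from transformers import pipeline

pipe = pipeline("text-classification", model="dennlinger/roberta-cls-consec")

# Predict coherence for each consecutive pair and insert a boundary on LABEL_0.
paragraphs = ["First paragraph ...", "Second paragraph ...", "Third paragraph ..."]
boundaries = []
for i in range(len(paragraphs) - 1):
    pred = pipe(f"{paragraphs[i]} [SEP] {paragraphs[i + 1]}")[0]
    if pred["label"] == "LABEL_0":   # the two segments do not share a topic
        boundaries.append(i + 1)     # a new section starts at paragraph i + 1
print(boundaries)
```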
# Performance
The results of this model can be found in the paper. We average over models from five different random seeds, which is why the specific results for this model might be different from the exact values in the paper.
Note that this model is *not* trained to work on classifying single texts, but only works with two (separated) inputs. |
932 | dhairya2303/bert-base-uncased-emotion-AD | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | This is the repo for the final project |
933 | dhairya2303/bert-base-uncased-emotion_holler | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | {'sadness':0,'joy':1,'love':2,'anger':3,'fear':4,'surprise':5} |
935 | dhpollack/distilbert-dummy-sentiment | [
"negative",
"positive"
] | ---
language:
- "multilingual"
- "en"
tags:
- "sentiment-analysis"
- "testing"
- "unit tests"
---
# DistilBert Dummy Sentiment Model
## Purpose
This is a dummy model that can be used for testing the transformers `pipeline` with the task `sentiment-analysis`. It should always give random results (i.e. `{"label": "negative", "score": 0.5}`).
## How to use
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", "dhpollack/distilbert-dummy-sentiment")
results = classifier(["this is a test", "another test"])
```
## Notes
This was created as follows:
1. Create a vocab.txt file (in /tmp/vocab.txt in this example).
```
[UNK]
[SEP]
[PAD]
[CLS]
[MASK]
```
2. Open a python shell:
```python
import transformers
config = transformers.DistilBertConfig(vocab_size=5, n_layers=1, n_heads=1, dim=1, hidden_dim=4 * 1, num_labels=2, id2label={0: "negative", 1: "positive"}, label2id={"negative": 0, "positive": 1})
model = transformers.DistilBertForSequenceClassification(config)
tokenizer = transformers.DistilBertTokenizer("/tmp/vocab.txt", model_max_length=512)
config.save_pretrained(".")
model.save_pretrained(".")
tokenizer.save_pretrained(".")
```
|
936 | dhtocks/tunib-electra-stereotype-classifier | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ### TUNiB-Electra Stereotype Detector
Finetuned TUNiB-Electra base with K-StereoSet.
Original Code: https://github.com/newfull5/Stereotype-Detector |
937 | digitalepidemiologylab/covid-twitter-bert-v2-mnli | [
"contradiction",
"neutral",
"entailment"
] | ---
language:
- en
thumbnail: https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png
tags:
- Twitter
- COVID-19
- text-classification
- pytorch
- tensorflow
- bert
license: mit
datasets:
- mnli
pipeline_tag: zero-shot-classification
widget:
- text: To stop the pandemic it is important that everyone turns up for their shots.
candidate_labels: health, sport, vaccine, guns
---
# COVID-Twitter-BERT v2 MNLI
## Model description
This model provides a zero-shot classifier to be used in cases where it is not possible to finetune CT-BERT on a specific task, due to lack of labelled data.
The technique is based on [Yin et al.](https://arxiv.org/abs/1909.00161).
The article describes a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers.
The model is already finetuned on 400,000 generic logical tasks.
We can then use it as a zero-shot classifier by reformulating the classification task as a question.
Let's say we want to classify COVID-tweets as vaccine-related and not vaccine-related.
The typical way would be to collect a few hundred pre-annotated tweets and organise them into two classes.
Then you would finetune the model on this.
With the zero-shot mnli-classifier, you can instead reformulate your question as "This text is about vaccines", and use this directly at inference - without any training.
Find more info about the model on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).
## Usage
Please note that how you formulate the question can give slightly different results.
Collecting a training set and finetuning on this will most likely give you better accuracy.
The easiest way to try this out is by using the Hugging Face pipeline.
This uses the default English template where it puts the text "This example is " in front of the text.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="digitalepidemiologylab/covid-twitter-bert-v2-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = 'To stop the pandemic it is important that everyone turns up for their shots.'
candidate_labels = ['health', 'sport', 'vaccine','guns']
hypothesis_template = 'This example is {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True)
```
## Training procedure
The model is finetuned on the 400k large [MNLI-task](https://cims.nyu.edu/~sbowman/multinli/).
## References
```bibtex
@article{muller2020covid,
title={COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter},
author={M{\"u}ller, Martin and Salath{\'e}, Marcel and Kummervold, Per E},
journal={arXiv preprint arXiv:2005.07503},
year={2020}
}
```
or
```
Martin Müller, Marcel Salathé, and Per E. Kummervold.
COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter.
arXiv preprint arXiv:2005.07503 (2020).
```
|
938 | diwank/dyda-deberta-pair | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: mit
---
# diwank/dyda-deberta-pair
Deberta-based Daily Dialog style dialog-act annotation classification model. It takes two sentences as inputs (one previous and one current utterance of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. Outputs one of the following labels (exactly as in the [daily-dialog dataset](https://huggingface.co/datasets/daily_dialog)): *__dummy__ (0), inform (1), question (2), directive (3), commissive (4)*
## Usage
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/dyda-deberta-pair")
convert_to_label = lambda n: ["__dummy__ (0), inform (1), question (2), directive (3), commissive (4)".split(', ')[i] for i in n]
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label(predictions) # inform (1)
``` |
939 | diwank/maptask-deberta-pair | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: mit
---
# maptask-deberta-pair
Deberta-based Daily MapTask style dialog-act annotations classification model
## Example
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/maptask-deberta-pair")
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label = lambda n: ["acknowledge (0), align (1), check (2), clarify (3), explain (4), instruct (5), query_w (6), query_yn (7), ready (8), reply_n (9), reply_w (10), reply_y (11)".split(', ')[i] for i in n]
convert_to_label(predictions) # reply_n (9)
``` |
940 | diwank/silicone-deberta-pair | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: mit
---
# diwank/silicone-deberta-pair
`deberta-base`-based dialog acts classifier. Trained on the `balanced` variant of the [silicone-merged](https://huggingface.co/datasets/diwank/silicone-merged) dataset: a simplified merged dialog act data from datasets in the [silicone](https://huggingface.co/datasets/silicone) collection.
Takes two sentences as inputs (one previous and one current utterance of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. **Outputs one of 11 labels**:
```python
(0, 'acknowledge')
(1, 'answer')
(2, 'backchannel')
(3, 'reply_yes')
(4, 'exclaim')
(5, 'say')
(6, 'reply_no')
(7, 'hold')
(8, 'ask')
(9, 'intent')
(10, 'ask_yes_no')
```
## Example:
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/silicone-deberta-pair")
convert_to_label = lambda n: [
['acknowledge',
'answer',
'backchannel',
'reply_yes',
'exclaim',
'say',
'reply_no',
'hold',
'ask',
'intent',
'ask_yes_no'
][i] for i in n
]
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label(predictions) # answer
```
## Report from W&B
https://wandb.ai/diwank/da-silicone-combined/reports/silicone-deberta-pair--VmlldzoxNTczNjE5?accessToken=yj1jz4c365z0y5b3olgzye7qgsl7qv9lxvqhmfhtb6300hql6veqa5xiq1skn8ys |
941 | dkhara/bert-news | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"L... | ### Bert-News |
942 | dmiller1/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261144741040841
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8436 | 1.0 | 250 | 0.3175 | 0.9105 | 0.9081 |
| 0.2492 | 2.0 | 500 | 0.2161 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.7.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
943 | dpalominop/spanish-bert-apoyo | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("dpalominop/spanish-bert-apoyo")
model = AutoModelForSequenceClassification.from_pretrained("dpalominop/spanish-bert-apoyo")
``` |
944 | dreji18/mymodel | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | This is just a test |
945 | ds198799/autonlp-predict_ROI_1-29797722 | [
"1.0",
"2.0",
"3.0"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- ds198799/autonlp-data-predict_ROI_1
co2_eq_emissions: 2.7516207978192737
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29797722
- CO2 Emissions (in grams): 2.7516207978192737
## Validation Metrics
- Loss: 0.6113826036453247
- Accuracy: 0.7559139784946236
- Macro F1: 0.4594734612976928
- Micro F1: 0.7559139784946236
- Weighted F1: 0.7195080232106192
- Macro Precision: 0.7175166413412577
- Micro Precision: 0.7559139784946236
- Weighted Precision: 0.7383048259333735
- Macro Recall: 0.4482203645846237
- Micro Recall: 0.7559139784946236
- Weighted Recall: 0.7559139784946236
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ds198799/autonlp-predict_ROI_1-29797722
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ds198799/autonlp-predict_ROI_1-29797722", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ds198799/autonlp-predict_ROI_1-29797722", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
946 | ds198799/autonlp-predict_ROI_1-29797730 | [
"1.0",
"2.0",
"3.0"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- ds198799/autonlp-data-predict_ROI_1
co2_eq_emissions: 2.2439127664461718
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29797730
- CO2 Emissions (in grams): 2.2439127664461718
## Validation Metrics
- Loss: 0.6314184069633484
- Accuracy: 0.7596774193548387
- Macro F1: 0.4740565300039588
- Micro F1: 0.7596774193548386
- Weighted F1: 0.7371623804622154
- Macro Precision: 0.6747804619412134
- Micro Precision: 0.7596774193548387
- Weighted Precision: 0.7496542175358931
- Macro Recall: 0.47743727441146655
- Micro Recall: 0.7596774193548387
- Weighted Recall: 0.7596774193548387
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ds198799/autonlp-predict_ROI_1-29797730
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ds198799/autonlp-predict_ROI_1-29797730", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ds198799/autonlp-predict_ROI_1-29797730", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
947 | dtam/autonlp-covid-fake-news-36839110 | [
"0",
"1"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- dtam/autonlp-data-covid-fake-news
co2_eq_emissions: 123.79523392848652
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36839110
- CO2 Emissions (in grams): 123.79523392848652
## Validation Metrics
- Loss: 0.17188367247581482
- Accuracy: 0.9714953271028037
- Precision: 0.9917948717948718
- Recall: 0.9480392156862745
- AUC: 0.9947452731092438
- F1: 0.9694235588972432
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/dtam/autonlp-covid-fake-news-36839110
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dtam/autonlp-covid-fake-news-36839110", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dtam/autonlp-covid-fake-news-36839110", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
953 | edumunozsala/RuPERTa_base_sentiment_analysis_es | [
"Negativo",
"Positivo"
] | ---
language: es
tags:
- sagemaker
- ruperta
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
model-index:
- name: RuPERTa_base_sentiment_analysis_es
  results:
  - task:
      name: Sentiment Analysis
      type: sentiment-analysis
    dataset:
      name: "IMDb Reviews in Spanish"
      type: IMDbreviews_es
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.881866
    - name: F1 Score
      type: f1
      value: 0.008272
    - name: Precision
      type: precision
      value: 0.858605
    - name: Recall
      type: recall
      value: 0.920062
widget:
- text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
---
## Model `RuPERTa_base_sentiment_analysis_es`
### **A finetuned model for Sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
The base model is **RuPERTa-base (uncased)**, a RoBERTa model trained on an uncased version of a big Spanish corpus.
It was trained by mrm8488, Manuel Romero. [Link to base model](https://huggingface.co/mrm8488/RuPERTa-base)
## Dataset
The dataset is a collection of movie reviews in Spanish, about 50,000 reviews. The dataset is balanced and provides every review in english, in spanish and the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Hyperparameters
```
{
    "epochs": "4",
    "train_batch_size": "32",
    "eval_batch_size": "8",
    "fp16": "true",
    "learning_rate": "3e-05",
    "model_name": "\"mrm8488/RuPERTa-base\"",
    "sagemaker_container_log_level": "20",
    "sagemaker_program": "\"train.py\""
}
```
## Evaluation results
- Accuracy = 0.8629333333333333
- F1 Score = 0.8648790746582545
- Precision = 0.8479381443298969
- Recall = 0.8825107296137339
## Test results
- Accuracy = 0.8066666666666666
- F1 Score = 0.8057862309134743
- Precision = 0.7928307854507116
- Recall = 0.8191721132897604
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/RuPERTa_base_sentiment_analysis_es")
text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
|
954 | ehddnr301/bert-base-ehddnr-ynat | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model_index:
- name: bert-base-ehddnr-ynat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: ynat
metric:
name: F1
type: f1
value: 0.8720568553403009
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-ehddnr-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3587
- F1: 0.8721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4398 | 0.8548 |
| No log | 2.0 | 358 | 0.3587 | 0.8721 |
| 0.3859 | 3.0 | 537 | 0.3639 | 0.8707 |
| 0.3859 | 4.0 | 716 | 0.3592 | 0.8692 |
| 0.3859 | 5.0 | 895 | 0.3646 | 0.8717 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
955 | ehdwns1516/klue-roberta-base-kornli | [
"entailment",
"neutral",
"contradiction"
] | # klue-roberta-base-kornli
* This model was trained with a Korean dataset.
* Input a premise sentence and a hypothesis sentence.
* You can use English, but don't expect accuracy.
* If the context is longer than 1200 characters, it may be cut off in the middle and the result may not come out well.
klue-roberta-base-kornli DEMO: [Ainize DEMO](https://main-klue-roberta-base-kornli-ehdwns1516.endpoint.ainize.ai/)
klue-roberta-base-kornli API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/klue-roberta-base_kornli)
## Overview
Language model: [klue/roberta-base](https://huggingface.co/klue/roberta-base)
Language: Korean
Training data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)
Eval data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/klue-roberta-base_finetunning_ex)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base-kornli")
classifier = pipeline(
"text-classification",
model="ehdwns1516/klue-roberta-base-kornli",
return_all_scores=True,
)
premise = "your premise"
hypothesis = "your hypothesis"
result = dict()
result[0] = classifier(premise + tokenizer.sep_token + hypothesis)[0]
```
|
956 | ehdwns1516/klue-roberta-base_sae | [
"yes/no",
"alternative",
"wh- questions",
"prohibitions",
"requirements",
"strong requirements"
] | # klue-roberta-base-sae
* This model was trained on a Korean dataset.
* Input the sentence whose intent you want to identify.
* You can use English, but do not expect high accuracy.
klue-roberta-base-kornli DEMO: [Ainize DEMO](https://main-klue-roberta-base-kornli-ehdwns1516.endpoint.ainize.ai/)
klue-roberta-base-kornli API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/KLUE-RoBERTa-base_sae)
## Overview
Language model: [klue/roberta-base](https://huggingface.co/klue/roberta-base)
Language: Korean
Training data: [kor_sae](https://huggingface.co/datasets/kor_sae)
Eval data: [kor_sae](https://huggingface.co/datasets/kor_sae)
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/KLUE-RoBERTa-base_sae_notebook)
## Usage
## In Transformers
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base_sae")
classifier = pipeline(
    "text-classification",
    model="ehdwns1516/klue-roberta-base_sae",
    return_all_scores=True,
)
context = "sentence whose intent you want to identify"
result = dict()
result[0] = classifier(context)[0]
```
|
957 | eliza-dukim/bert-base-finetuned-sts-deprecated | [
"LABEL_0"
] | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
model_index:
- name: bert-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metric:
name: Pearsonr
type: pearsonr
value: 0.837527365741951
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5657
- Pearsonr: 0.8375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 92 | 0.8280 | 0.7680 |
| No log | 2.0 | 184 | 0.6602 | 0.8185 |
| No log | 3.0 | 276 | 0.5939 | 0.8291 |
| No log | 4.0 | 368 | 0.5765 | 0.8367 |
| No log | 5.0 | 460 | 0.5657 | 0.8375 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
958 | eliza-dukim/bert-base-finetuned-sts | [
"LABEL_0"
] | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
- f1
model-index:
- name: bert-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.8756147003619346
- name: F1
type: f1
value: 0.8416666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4115
- Pearsonr: 0.8756
- F1: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
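No usage example is provided. Since the checkpoint exposes a single output head (LABEL_0), a minimal sketch would read the raw logit as the STS similarity score for a sentence pair; the Korean pair below is a hypothetical example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Minimal sketch: the single logit is read as the semantic similarity score for the pair.
model_id = "eliza-dukim/bert-base-finetuned-sts"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 맑다.", return_tensors="pt")  # hypothetical pair
with torch.inference_mode():
    print(model(**inputs).logits.squeeze().item())
```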
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7836 | 1.0 | 365 | 0.5507 | 0.8435 | 0.8121 |
| 0.1564 | 2.0 | 730 | 0.4396 | 0.8495 | 0.8136 |
| 0.0989 | 3.0 | 1095 | 0.4115 | 0.8756 | 0.8417 |
| 0.0682 | 4.0 | 1460 | 0.4466 | 0.8746 | 0.8449 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
959 | eliza-dukim/bert-base-finetuned-ynat | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model_index:
- name: bert-base-finetuned-ynat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: ynat
metric:
name: F1
type: f1
value: 0.8699556378491373
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- F1: 0.8700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4458 | 0.8516 |
| No log | 2.0 | 358 | 0.3741 | 0.8700 |
| 0.385 | 3.0 | 537 | 0.3720 | 0.8693 |
| 0.385 | 4.0 | 716 | 0.3744 | 0.8689 |
| 0.385 | 5.0 | 895 | 0.3801 | 0.8695 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
960 | elozano/tweet_emotion_eval | [
"Anger",
"Joy",
"Optimism",
"Sadness"
] | ---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "Stop sharing which songs did you listen to during this year on Spotify, NOBODY CARES"
example_title: "Anger"
- text: "I love that joke HAHAHAHAHA"
example_title: "Joy"
- text: "Despite I've not studied a lot for this exam, I think I will pass 😜"
example_title: "Optimism"
- text: "My dog died this morning..."
example_title: "Sadness"
---
|
961 | elozano/tweet_offensive_eval | [
"Non-Offensive",
"Offensive"
] | ---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "You're a complete idiot!"
example_title: "Offensive"
- text: "I am tired of studying for tomorrow's exam"
example_title: "Non-Offensive"
---
|
962 | elozano/tweet_sentiment_eval | [
"Negative",
"Neutral",
"Positive"
] | ---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "I love summer!"
example_title: "Positive"
- text: "Does anyone want to play?"
example_title: "Neutral"
- text: "This movie is just awful! 😫"
example_title: "Negative"
---
|
963 | emekaboris/autonlp-new_tx-607517182 | [
"1.0",
"2.0",
"3.0",
"4.0",
"5.0",
"6.0",
"7.0",
"8.0"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- emekaboris/autonlp-data-new_tx
co2_eq_emissions: 3.842950628218143
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 607517182
- CO2 Emissions (in grams): 3.842950628218143
## Validation Metrics
- Loss: 0.4033123552799225
- Accuracy: 0.8679706601466992
- Macro F1: 0.719846919916469
- Micro F1: 0.8679706601466993
- Weighted F1: 0.8622411469250695
- Macro Precision: 0.725309168791155
- Micro Precision: 0.8679706601466992
- Weighted Precision: 0.8604370906049568
- Macro Recall: 0.7216672806300003
- Micro Recall: 0.8679706601466992
- Weighted Recall: 0.8679706601466992
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/emekaboris/autonlp-new_tx-607517182
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emekaboris/autonlp-new_tx-607517182", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emekaboris/autonlp-new_tx-607517182", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
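# The raw logits can be mapped to the predicted class label (one of the classes listed above)
predicted_label = model.config.id2label[int(outputs.logits.argmax(dim=-1))]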
``` |
964 | emekaboris/autonlp-txc-17923124 | [
"1.0",
"10.0",
"11.0",
"12.0",
"13.0",
"14.0",
"15.0",
"16.0",
"17.0",
"18.0",
"19.0",
"2.0",
"20.0",
"21.0",
"22.0",
"23.0",
"24.0",
"3.0",
"4.0",
"5.0",
"6.0",
"7.0",
"8.0",
"9.0"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- emekaboris/autonlp-data-txc
co2_eq_emissions: 133.57087522185148
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 17923124
- CO2 Emissions (in grams): 133.57087522185148
## Validation Metrics
- Loss: 0.2080804407596588
- Accuracy: 0.9325402190077058
- Macro F1: 0.7283811287183823
- Micro F1: 0.9325402190077058
- Weighted F1: 0.9315711955594153
- Macro Precision: 0.8106599661500661
- Micro Precision: 0.9325402190077058
- Weighted Precision: 0.9324644116921059
- Macro Recall: 0.7020515544343829
- Micro Recall: 0.9325402190077058
- Weighted Recall: 0.9325402190077058
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/emekaboris/autonlp-txc-17923124
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emekaboris/autonlp-txc-17923124", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emekaboris/autonlp-txc-17923124", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
965 | emekaboris/autonlp-txc-17923129 | [
"1.0",
"10.0",
"11.0",
"12.0",
"13.0",
"14.0",
"15.0",
"16.0",
"17.0",
"18.0",
"19.0",
"2.0",
"20.0",
"21.0",
"22.0",
"23.0",
"24.0",
"3.0",
"4.0",
"5.0",
"6.0",
"7.0",
"8.0",
"9.0"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- emekaboris/autonlp-data-txc
co2_eq_emissions: 610.861733873082
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 17923129
- CO2 Emissions (in grams): 610.861733873082
## Validation Metrics
- Loss: 0.2319454699754715
- Accuracy: 0.9264228741381642
- Macro F1: 0.6730537318152493
- Micro F1: 0.9264228741381642
- Weighted F1: 0.9251493598895151
- Macro Precision: 0.7767479491141245
- Micro Precision: 0.9264228741381642
- Weighted Precision: 0.9277971545757154
- Macro Recall: 0.6617262519071917
- Micro Recall: 0.9264228741381642
- Weighted Recall: 0.9264228741381642
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/emekaboris/autonlp-txc-17923129
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emekaboris/autonlp-txc-17923129", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emekaboris/autonlp-txc-17923129", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
969 | emrecan/bert-base-multilingual-cased-allnli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: mit
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased_allnli_tr
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the nli_tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6144
- Accuracy: 0.7662
## Model description
More information needed
## Intended uses & limitations
More information needed
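The card defines a zero-shot widget but no code. A minimal zero-shot sketch, reusing the widget example above (text and candidate labels are taken from the widget):
```python
from transformers import pipeline
# Minimal sketch: text and candidate labels come from the widget example above.
classifier = pipeline("zero-shot-classification",
                      model="emrecan/bert-base-multilingual-cased-allnli_tr")
print(classifier("Dolar yükselmeye devam ediyor.",
                 candidate_labels=["ekonomi", "siyaset", "spor"]))
```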
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8623 | 0.03 | 1000 | 0.9076 | 0.5917 |
| 0.7528 | 0.07 | 2000 | 0.8587 | 0.6119 |
| 0.7074 | 0.1 | 3000 | 0.7867 | 0.6647 |
| 0.6949 | 0.14 | 4000 | 0.7474 | 0.6772 |
| 0.6681 | 0.17 | 5000 | 0.7661 | 0.6814 |
| 0.6597 | 0.2 | 6000 | 0.7264 | 0.6943 |
| 0.6495 | 0.24 | 7000 | 0.7841 | 0.6781 |
| 0.6323 | 0.27 | 8000 | 0.7256 | 0.6952 |
| 0.6308 | 0.31 | 9000 | 0.7319 | 0.6958 |
| 0.6254 | 0.34 | 10000 | 0.7054 | 0.7004 |
| 0.6233 | 0.37 | 11000 | 0.7069 | 0.7085 |
| 0.6165 | 0.41 | 12000 | 0.6880 | 0.7181 |
| 0.6033 | 0.44 | 13000 | 0.6844 | 0.7197 |
| 0.6014 | 0.48 | 14000 | 0.6753 | 0.7129 |
| 0.5947 | 0.51 | 15000 | 0.7000 | 0.7039 |
| 0.5965 | 0.54 | 16000 | 0.6708 | 0.7263 |
| 0.5979 | 0.58 | 17000 | 0.6562 | 0.7285 |
| 0.5787 | 0.61 | 18000 | 0.6554 | 0.7297 |
| 0.58 | 0.65 | 19000 | 0.6544 | 0.7315 |
| 0.574 | 0.68 | 20000 | 0.6549 | 0.7339 |
| 0.5751 | 0.71 | 21000 | 0.6545 | 0.7289 |
| 0.5659 | 0.75 | 22000 | 0.6467 | 0.7371 |
| 0.5732 | 0.78 | 23000 | 0.6448 | 0.7362 |
| 0.5637 | 0.82 | 24000 | 0.6520 | 0.7355 |
| 0.5648 | 0.85 | 25000 | 0.6412 | 0.7345 |
| 0.5622 | 0.88 | 26000 | 0.6350 | 0.7358 |
| 0.5579 | 0.92 | 27000 | 0.6347 | 0.7393 |
| 0.5518 | 0.95 | 28000 | 0.6417 | 0.7392 |
| 0.5547 | 0.99 | 29000 | 0.6321 | 0.7437 |
| 0.524 | 1.02 | 30000 | 0.6430 | 0.7412 |
| 0.4982 | 1.05 | 31000 | 0.6253 | 0.7458 |
| 0.5002 | 1.09 | 32000 | 0.6316 | 0.7418 |
| 0.4993 | 1.12 | 33000 | 0.6197 | 0.7487 |
| 0.4963 | 1.15 | 34000 | 0.6307 | 0.7462 |
| 0.504 | 1.19 | 35000 | 0.6272 | 0.7480 |
| 0.4922 | 1.22 | 36000 | 0.6410 | 0.7433 |
| 0.5016 | 1.26 | 37000 | 0.6295 | 0.7461 |
| 0.4957 | 1.29 | 38000 | 0.6183 | 0.7506 |
| 0.4883 | 1.32 | 39000 | 0.6261 | 0.7502 |
| 0.4985 | 1.36 | 40000 | 0.6315 | 0.7496 |
| 0.4885 | 1.39 | 41000 | 0.6189 | 0.7529 |
| 0.4909 | 1.43 | 42000 | 0.6189 | 0.7473 |
| 0.4894 | 1.46 | 43000 | 0.6314 | 0.7433 |
| 0.4912 | 1.49 | 44000 | 0.6184 | 0.7446 |
| 0.4851 | 1.53 | 45000 | 0.6258 | 0.7461 |
| 0.4879 | 1.56 | 46000 | 0.6286 | 0.7480 |
| 0.4907 | 1.6 | 47000 | 0.6196 | 0.7512 |
| 0.4884 | 1.63 | 48000 | 0.6157 | 0.7526 |
| 0.4755 | 1.66 | 49000 | 0.6056 | 0.7591 |
| 0.4811 | 1.7 | 50000 | 0.5977 | 0.7582 |
| 0.4787 | 1.73 | 51000 | 0.5915 | 0.7621 |
| 0.4779 | 1.77 | 52000 | 0.6014 | 0.7583 |
| 0.4767 | 1.8 | 53000 | 0.6041 | 0.7623 |
| 0.4737 | 1.83 | 54000 | 0.6093 | 0.7563 |
| 0.4836 | 1.87 | 55000 | 0.6001 | 0.7568 |
| 0.4765 | 1.9 | 56000 | 0.6109 | 0.7601 |
| 0.4776 | 1.94 | 57000 | 0.6046 | 0.7599 |
| 0.4769 | 1.97 | 58000 | 0.5970 | 0.7568 |
| 0.4654 | 2.0 | 59000 | 0.6147 | 0.7614 |
| 0.4144 | 2.04 | 60000 | 0.6439 | 0.7566 |
| 0.4101 | 2.07 | 61000 | 0.6373 | 0.7527 |
| 0.4192 | 2.11 | 62000 | 0.6136 | 0.7575 |
| 0.4128 | 2.14 | 63000 | 0.6283 | 0.7560 |
| 0.4204 | 2.17 | 64000 | 0.6187 | 0.7625 |
| 0.4114 | 2.21 | 65000 | 0.6127 | 0.7621 |
| 0.4097 | 2.24 | 66000 | 0.6188 | 0.7626 |
| 0.4129 | 2.28 | 67000 | 0.6156 | 0.7639 |
| 0.4085 | 2.31 | 68000 | 0.6232 | 0.7616 |
| 0.4074 | 2.34 | 69000 | 0.6240 | 0.7605 |
| 0.409 | 2.38 | 70000 | 0.6153 | 0.7591 |
| 0.4046 | 2.41 | 71000 | 0.6375 | 0.7587 |
| 0.4117 | 2.45 | 72000 | 0.6145 | 0.7629 |
| 0.4002 | 2.48 | 73000 | 0.6279 | 0.7610 |
| 0.4042 | 2.51 | 74000 | 0.6176 | 0.7646 |
| 0.4055 | 2.55 | 75000 | 0.6277 | 0.7643 |
| 0.4021 | 2.58 | 76000 | 0.6196 | 0.7642 |
| 0.4081 | 2.62 | 77000 | 0.6127 | 0.7659 |
| 0.408 | 2.65 | 78000 | 0.6237 | 0.7638 |
| 0.3997 | 2.68 | 79000 | 0.6190 | 0.7636 |
| 0.4093 | 2.72 | 80000 | 0.6152 | 0.7648 |
| 0.4095 | 2.75 | 81000 | 0.6155 | 0.7627 |
| 0.4088 | 2.79 | 82000 | 0.6130 | 0.7641 |
| 0.4063 | 2.82 | 83000 | 0.6072 | 0.7646 |
| 0.3978 | 2.85 | 84000 | 0.6128 | 0.7662 |
| 0.4034 | 2.89 | 85000 | 0.6157 | 0.7627 |
| 0.4044 | 2.92 | 86000 | 0.6127 | 0.7661 |
| 0.403 | 2.96 | 87000 | 0.6126 | 0.7664 |
| 0.4033 | 2.99 | 88000 | 0.6144 | 0.7662 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
970 | emrecan/bert-base-multilingual-cased-multinli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
971 | emrecan/bert-base-multilingual-cased-snli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
972 | emrecan/bert-base-turkish-cased-allnli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: mit
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-turkish-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the nli_tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5771
- Accuracy: 0.7978
## Model description
More information needed
## Intended uses & limitations
More information needed
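Besides zero-shot use, the model can score a premise and hypothesis pair directly. A minimal NLI sketch (the example pair is hypothetical):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Minimal NLI sketch: probabilities over entailment / neutral / contradiction for a pair.
model_id = "emrecan/bert-base-turkish-cased-allnli_tr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("Dolar yükselmeye devam ediyor.", "Dolar düşüyor.", return_tensors="pt")  # hypothetical pair
with torch.inference_mode():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```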
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8559 | 0.03 | 1000 | 0.7577 | 0.6798 |
| 0.6612 | 0.07 | 2000 | 0.7263 | 0.6958 |
| 0.6115 | 0.1 | 3000 | 0.6431 | 0.7364 |
| 0.5916 | 0.14 | 4000 | 0.6347 | 0.7407 |
| 0.5719 | 0.17 | 5000 | 0.6317 | 0.7483 |
| 0.5575 | 0.2 | 6000 | 0.6034 | 0.7544 |
| 0.5521 | 0.24 | 7000 | 0.6148 | 0.7568 |
| 0.5393 | 0.27 | 8000 | 0.5931 | 0.7610 |
| 0.5382 | 0.31 | 9000 | 0.5866 | 0.7665 |
| 0.5306 | 0.34 | 10000 | 0.5881 | 0.7594 |
| 0.5295 | 0.37 | 11000 | 0.6120 | 0.7632 |
| 0.5225 | 0.41 | 12000 | 0.5620 | 0.7759 |
| 0.5112 | 0.44 | 13000 | 0.5641 | 0.7769 |
| 0.5133 | 0.48 | 14000 | 0.5571 | 0.7798 |
| 0.5023 | 0.51 | 15000 | 0.5719 | 0.7722 |
| 0.5017 | 0.54 | 16000 | 0.5482 | 0.7844 |
| 0.5111 | 0.58 | 17000 | 0.5503 | 0.7800 |
| 0.4929 | 0.61 | 18000 | 0.5502 | 0.7836 |
| 0.4923 | 0.65 | 19000 | 0.5424 | 0.7843 |
| 0.4894 | 0.68 | 20000 | 0.5417 | 0.7851 |
| 0.4877 | 0.71 | 21000 | 0.5514 | 0.7841 |
| 0.4818 | 0.75 | 22000 | 0.5494 | 0.7848 |
| 0.4898 | 0.78 | 23000 | 0.5450 | 0.7859 |
| 0.4823 | 0.82 | 24000 | 0.5417 | 0.7878 |
| 0.4806 | 0.85 | 25000 | 0.5354 | 0.7875 |
| 0.4779 | 0.88 | 26000 | 0.5338 | 0.7848 |
| 0.4744 | 0.92 | 27000 | 0.5277 | 0.7934 |
| 0.4678 | 0.95 | 28000 | 0.5507 | 0.7871 |
| 0.4727 | 0.99 | 29000 | 0.5603 | 0.7789 |
| 0.4243 | 1.02 | 30000 | 0.5626 | 0.7894 |
| 0.3955 | 1.05 | 31000 | 0.5324 | 0.7939 |
| 0.4022 | 1.09 | 32000 | 0.5322 | 0.7925 |
| 0.3976 | 1.12 | 33000 | 0.5450 | 0.7920 |
| 0.3913 | 1.15 | 34000 | 0.5464 | 0.7948 |
| 0.406 | 1.19 | 35000 | 0.5406 | 0.7958 |
| 0.3875 | 1.22 | 36000 | 0.5489 | 0.7878 |
| 0.4024 | 1.26 | 37000 | 0.5427 | 0.7925 |
| 0.3988 | 1.29 | 38000 | 0.5335 | 0.7904 |
| 0.393 | 1.32 | 39000 | 0.5415 | 0.7923 |
| 0.3988 | 1.36 | 40000 | 0.5385 | 0.7962 |
| 0.3912 | 1.39 | 41000 | 0.5383 | 0.7950 |
| 0.3949 | 1.43 | 42000 | 0.5415 | 0.7931 |
| 0.3902 | 1.46 | 43000 | 0.5438 | 0.7893 |
| 0.3948 | 1.49 | 44000 | 0.5348 | 0.7906 |
| 0.3921 | 1.53 | 45000 | 0.5361 | 0.7890 |
| 0.3944 | 1.56 | 46000 | 0.5419 | 0.7953 |
| 0.3959 | 1.6 | 47000 | 0.5402 | 0.7967 |
| 0.3926 | 1.63 | 48000 | 0.5429 | 0.7925 |
| 0.3854 | 1.66 | 49000 | 0.5346 | 0.7959 |
| 0.3864 | 1.7 | 50000 | 0.5241 | 0.7979 |
| 0.385 | 1.73 | 51000 | 0.5149 | 0.8002 |
| 0.3871 | 1.77 | 52000 | 0.5325 | 0.8002 |
| 0.3819 | 1.8 | 53000 | 0.5332 | 0.8022 |
| 0.384 | 1.83 | 54000 | 0.5419 | 0.7873 |
| 0.3899 | 1.87 | 55000 | 0.5225 | 0.7974 |
| 0.3894 | 1.9 | 56000 | 0.5358 | 0.7977 |
| 0.3838 | 1.94 | 57000 | 0.5264 | 0.7988 |
| 0.3881 | 1.97 | 58000 | 0.5280 | 0.7956 |
| 0.3756 | 2.0 | 59000 | 0.5601 | 0.7969 |
| 0.3156 | 2.04 | 60000 | 0.5936 | 0.7925 |
| 0.3125 | 2.07 | 61000 | 0.5898 | 0.7938 |
| 0.3179 | 2.11 | 62000 | 0.5591 | 0.7981 |
| 0.315 | 2.14 | 63000 | 0.5853 | 0.7970 |
| 0.3122 | 2.17 | 64000 | 0.5802 | 0.7979 |
| 0.3105 | 2.21 | 65000 | 0.5758 | 0.7979 |
| 0.3076 | 2.24 | 66000 | 0.5685 | 0.7980 |
| 0.3117 | 2.28 | 67000 | 0.5799 | 0.7944 |
| 0.3108 | 2.31 | 68000 | 0.5742 | 0.7988 |
| 0.3047 | 2.34 | 69000 | 0.5907 | 0.7921 |
| 0.3114 | 2.38 | 70000 | 0.5723 | 0.7937 |
| 0.3035 | 2.41 | 71000 | 0.5944 | 0.7955 |
| 0.3129 | 2.45 | 72000 | 0.5838 | 0.7928 |
| 0.3071 | 2.48 | 73000 | 0.5929 | 0.7949 |
| 0.3061 | 2.51 | 74000 | 0.5794 | 0.7967 |
| 0.3068 | 2.55 | 75000 | 0.5892 | 0.7954 |
| 0.3053 | 2.58 | 76000 | 0.5796 | 0.7962 |
| 0.3117 | 2.62 | 77000 | 0.5763 | 0.7981 |
| 0.3062 | 2.65 | 78000 | 0.5852 | 0.7964 |
| 0.3004 | 2.68 | 79000 | 0.5793 | 0.7966 |
| 0.3146 | 2.72 | 80000 | 0.5693 | 0.7985 |
| 0.3146 | 2.75 | 81000 | 0.5788 | 0.7982 |
| 0.3079 | 2.79 | 82000 | 0.5726 | 0.7978 |
| 0.3058 | 2.82 | 83000 | 0.5677 | 0.7988 |
| 0.3055 | 2.85 | 84000 | 0.5701 | 0.7982 |
| 0.3049 | 2.89 | 85000 | 0.5809 | 0.7970 |
| 0.3044 | 2.92 | 86000 | 0.5741 | 0.7986 |
| 0.3057 | 2.96 | 87000 | 0.5743 | 0.7980 |
| 0.3081 | 2.99 | 88000 | 0.5771 | 0.7978 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
973 | emrecan/bert-base-turkish-cased-multinli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
974 | emrecan/bert-base-turkish-cased-snli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
975 | emrecan/convbert-base-turkish-mc4-cased-allnli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convbert-base-turkish-mc4-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/convbert-base-turkish-mc4-cased](https://huggingface.co/dbmdz/convbert-base-turkish-mc4-cased) on the nli_tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5541
- Accuracy: 0.8111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7338 | 0.03 | 1000 | 0.6722 | 0.7236 |
| 0.603 | 0.07 | 2000 | 0.6465 | 0.7399 |
| 0.5605 | 0.1 | 3000 | 0.5801 | 0.7728 |
| 0.55 | 0.14 | 4000 | 0.5994 | 0.7626 |
| 0.529 | 0.17 | 5000 | 0.5720 | 0.7697 |
| 0.5196 | 0.2 | 6000 | 0.5692 | 0.7769 |
| 0.5117 | 0.24 | 7000 | 0.5725 | 0.7785 |
| 0.5044 | 0.27 | 8000 | 0.5532 | 0.7787 |
| 0.5016 | 0.31 | 9000 | 0.5546 | 0.7812 |
| 0.5031 | 0.34 | 10000 | 0.5461 | 0.7870 |
| 0.4949 | 0.37 | 11000 | 0.5725 | 0.7826 |
| 0.4894 | 0.41 | 12000 | 0.5419 | 0.7933 |
| 0.4796 | 0.44 | 13000 | 0.5278 | 0.7914 |
| 0.4795 | 0.48 | 14000 | 0.5193 | 0.7953 |
| 0.4713 | 0.51 | 15000 | 0.5534 | 0.7771 |
| 0.4738 | 0.54 | 16000 | 0.5098 | 0.8039 |
| 0.481 | 0.58 | 17000 | 0.5244 | 0.7958 |
| 0.4634 | 0.61 | 18000 | 0.5215 | 0.7972 |
| 0.465 | 0.65 | 19000 | 0.5129 | 0.7985 |
| 0.4624 | 0.68 | 20000 | 0.5062 | 0.8047 |
| 0.4597 | 0.71 | 21000 | 0.5114 | 0.8029 |
| 0.4571 | 0.75 | 22000 | 0.5070 | 0.8073 |
| 0.4602 | 0.78 | 23000 | 0.5115 | 0.7993 |
| 0.4552 | 0.82 | 24000 | 0.5085 | 0.8052 |
| 0.4538 | 0.85 | 25000 | 0.5118 | 0.7974 |
| 0.4517 | 0.88 | 26000 | 0.5036 | 0.8044 |
| 0.4517 | 0.92 | 27000 | 0.4930 | 0.8062 |
| 0.4413 | 0.95 | 28000 | 0.5307 | 0.7964 |
| 0.4483 | 0.99 | 29000 | 0.5195 | 0.7938 |
| 0.4036 | 1.02 | 30000 | 0.5238 | 0.8029 |
| 0.3724 | 1.05 | 31000 | 0.5125 | 0.8082 |
| 0.3777 | 1.09 | 32000 | 0.5099 | 0.8075 |
| 0.3753 | 1.12 | 33000 | 0.5172 | 0.8053 |
| 0.367 | 1.15 | 34000 | 0.5188 | 0.8053 |
| 0.3819 | 1.19 | 35000 | 0.5218 | 0.8046 |
| 0.363 | 1.22 | 36000 | 0.5202 | 0.7993 |
| 0.3794 | 1.26 | 37000 | 0.5240 | 0.8048 |
| 0.3749 | 1.29 | 38000 | 0.5026 | 0.8054 |
| 0.367 | 1.32 | 39000 | 0.5198 | 0.8075 |
| 0.3759 | 1.36 | 40000 | 0.5298 | 0.7993 |
| 0.3701 | 1.39 | 41000 | 0.5072 | 0.8091 |
| 0.3742 | 1.43 | 42000 | 0.5071 | 0.8098 |
| 0.3706 | 1.46 | 43000 | 0.5317 | 0.8037 |
| 0.3716 | 1.49 | 44000 | 0.5034 | 0.8052 |
| 0.3717 | 1.53 | 45000 | 0.5258 | 0.8012 |
| 0.3714 | 1.56 | 46000 | 0.5195 | 0.8050 |
| 0.3781 | 1.6 | 47000 | 0.5004 | 0.8104 |
| 0.3725 | 1.63 | 48000 | 0.5124 | 0.8113 |
| 0.3624 | 1.66 | 49000 | 0.5040 | 0.8094 |
| 0.3657 | 1.7 | 50000 | 0.4979 | 0.8111 |
| 0.3669 | 1.73 | 51000 | 0.4968 | 0.8100 |
| 0.3636 | 1.77 | 52000 | 0.5075 | 0.8079 |
| 0.36 | 1.8 | 53000 | 0.4985 | 0.8110 |
| 0.3624 | 1.83 | 54000 | 0.5125 | 0.8070 |
| 0.366 | 1.87 | 55000 | 0.4918 | 0.8117 |
| 0.3655 | 1.9 | 56000 | 0.5051 | 0.8109 |
| 0.3609 | 1.94 | 57000 | 0.5083 | 0.8105 |
| 0.3672 | 1.97 | 58000 | 0.5129 | 0.8085 |
| 0.3545 | 2.0 | 59000 | 0.5467 | 0.8109 |
| 0.2938 | 2.04 | 60000 | 0.5635 | 0.8049 |
| 0.29 | 2.07 | 61000 | 0.5781 | 0.8041 |
| 0.2992 | 2.11 | 62000 | 0.5470 | 0.8077 |
| 0.2957 | 2.14 | 63000 | 0.5765 | 0.8073 |
| 0.292 | 2.17 | 64000 | 0.5472 | 0.8106 |
| 0.2893 | 2.21 | 65000 | 0.5590 | 0.8085 |
| 0.2883 | 2.24 | 66000 | 0.5535 | 0.8064 |
| 0.2923 | 2.28 | 67000 | 0.5508 | 0.8095 |
| 0.2868 | 2.31 | 68000 | 0.5679 | 0.8098 |
| 0.2892 | 2.34 | 69000 | 0.5660 | 0.8057 |
| 0.292 | 2.38 | 70000 | 0.5494 | 0.8088 |
| 0.286 | 2.41 | 71000 | 0.5653 | 0.8085 |
| 0.2939 | 2.45 | 72000 | 0.5673 | 0.8070 |
| 0.286 | 2.48 | 73000 | 0.5600 | 0.8092 |
| 0.2844 | 2.51 | 74000 | 0.5508 | 0.8095 |
| 0.2913 | 2.55 | 75000 | 0.5645 | 0.8088 |
| 0.2859 | 2.58 | 76000 | 0.5677 | 0.8095 |
| 0.2892 | 2.62 | 77000 | 0.5598 | 0.8113 |
| 0.2898 | 2.65 | 78000 | 0.5618 | 0.8096 |
| 0.2814 | 2.68 | 79000 | 0.5664 | 0.8103 |
| 0.2917 | 2.72 | 80000 | 0.5484 | 0.8122 |
| 0.2907 | 2.75 | 81000 | 0.5522 | 0.8116 |
| 0.2896 | 2.79 | 82000 | 0.5540 | 0.8093 |
| 0.2907 | 2.82 | 83000 | 0.5469 | 0.8104 |
| 0.2882 | 2.85 | 84000 | 0.5471 | 0.8122 |
| 0.2878 | 2.89 | 85000 | 0.5532 | 0.8108 |
| 0.2858 | 2.92 | 86000 | 0.5511 | 0.8115 |
| 0.288 | 2.96 | 87000 | 0.5491 | 0.8111 |
| 0.2834 | 2.99 | 88000 | 0.5541 | 0.8111 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
976 | emrecan/convbert-base-turkish-mc4-cased-multinli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
977 | emrecan/convbert-base-turkish-mc4-cased-snli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
978 | emrecan/distilbert-base-turkish-cased-allnli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-turkish-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the nli_tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6481
- Accuracy: 0.7381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.94 | 0.03 | 1000 | 0.9074 | 0.5813 |
| 0.8102 | 0.07 | 2000 | 0.8802 | 0.5949 |
| 0.7737 | 0.1 | 3000 | 0.8491 | 0.6155 |
| 0.7576 | 0.14 | 4000 | 0.8283 | 0.6261 |
| 0.7286 | 0.17 | 5000 | 0.8150 | 0.6362 |
| 0.7162 | 0.2 | 6000 | 0.7998 | 0.6400 |
| 0.7092 | 0.24 | 7000 | 0.7830 | 0.6565 |
| 0.6962 | 0.27 | 8000 | 0.7653 | 0.6629 |
| 0.6876 | 0.31 | 9000 | 0.7630 | 0.6687 |
| 0.6778 | 0.34 | 10000 | 0.7475 | 0.6739 |
| 0.6737 | 0.37 | 11000 | 0.7495 | 0.6781 |
| 0.6712 | 0.41 | 12000 | 0.7350 | 0.6826 |
| 0.6559 | 0.44 | 13000 | 0.7274 | 0.6897 |
| 0.6493 | 0.48 | 14000 | 0.7248 | 0.6902 |
| 0.6483 | 0.51 | 15000 | 0.7263 | 0.6858 |
| 0.6445 | 0.54 | 16000 | 0.7070 | 0.6978 |
| 0.6467 | 0.58 | 17000 | 0.7083 | 0.6981 |
| 0.6332 | 0.61 | 18000 | 0.6996 | 0.7004 |
| 0.6288 | 0.65 | 19000 | 0.6979 | 0.6978 |
| 0.6308 | 0.68 | 20000 | 0.6912 | 0.7040 |
| 0.622 | 0.71 | 21000 | 0.6904 | 0.7092 |
| 0.615 | 0.75 | 22000 | 0.6872 | 0.7094 |
| 0.6186 | 0.78 | 23000 | 0.6877 | 0.7075 |
| 0.6183 | 0.82 | 24000 | 0.6818 | 0.7111 |
| 0.6115 | 0.85 | 25000 | 0.6856 | 0.7122 |
| 0.608 | 0.88 | 26000 | 0.6697 | 0.7179 |
| 0.6071 | 0.92 | 27000 | 0.6727 | 0.7181 |
| 0.601 | 0.95 | 28000 | 0.6798 | 0.7118 |
| 0.6018 | 0.99 | 29000 | 0.6854 | 0.7071 |
| 0.5762 | 1.02 | 30000 | 0.6697 | 0.7214 |
| 0.5507 | 1.05 | 31000 | 0.6710 | 0.7185 |
| 0.5575 | 1.09 | 32000 | 0.6709 | 0.7226 |
| 0.5493 | 1.12 | 33000 | 0.6659 | 0.7191 |
| 0.5464 | 1.15 | 34000 | 0.6709 | 0.7232 |
| 0.5595 | 1.19 | 35000 | 0.6642 | 0.7220 |
| 0.5446 | 1.22 | 36000 | 0.6709 | 0.7202 |
| 0.5524 | 1.26 | 37000 | 0.6751 | 0.7148 |
| 0.5473 | 1.29 | 38000 | 0.6642 | 0.7209 |
| 0.5477 | 1.32 | 39000 | 0.6662 | 0.7223 |
| 0.5522 | 1.36 | 40000 | 0.6586 | 0.7227 |
| 0.5406 | 1.39 | 41000 | 0.6602 | 0.7258 |
| 0.54 | 1.43 | 42000 | 0.6564 | 0.7273 |
| 0.5458 | 1.46 | 43000 | 0.6780 | 0.7213 |
| 0.5448 | 1.49 | 44000 | 0.6561 | 0.7235 |
| 0.5418 | 1.53 | 45000 | 0.6600 | 0.7253 |
| 0.5408 | 1.56 | 46000 | 0.6616 | 0.7274 |
| 0.5451 | 1.6 | 47000 | 0.6557 | 0.7283 |
| 0.5385 | 1.63 | 48000 | 0.6583 | 0.7295 |
| 0.5261 | 1.66 | 49000 | 0.6468 | 0.7325 |
| 0.5364 | 1.7 | 50000 | 0.6447 | 0.7329 |
| 0.5294 | 1.73 | 51000 | 0.6429 | 0.7320 |
| 0.5332 | 1.77 | 52000 | 0.6508 | 0.7272 |
| 0.5274 | 1.8 | 53000 | 0.6492 | 0.7326 |
| 0.5286 | 1.83 | 54000 | 0.6470 | 0.7318 |
| 0.5359 | 1.87 | 55000 | 0.6393 | 0.7354 |
| 0.5366 | 1.9 | 56000 | 0.6445 | 0.7367 |
| 0.5296 | 1.94 | 57000 | 0.6413 | 0.7313 |
| 0.5346 | 1.97 | 58000 | 0.6393 | 0.7315 |
| 0.5264 | 2.0 | 59000 | 0.6448 | 0.7357 |
| 0.4857 | 2.04 | 60000 | 0.6640 | 0.7335 |
| 0.4888 | 2.07 | 61000 | 0.6612 | 0.7318 |
| 0.4964 | 2.11 | 62000 | 0.6516 | 0.7337 |
| 0.493 | 2.14 | 63000 | 0.6503 | 0.7356 |
| 0.4961 | 2.17 | 64000 | 0.6519 | 0.7348 |
| 0.4847 | 2.21 | 65000 | 0.6517 | 0.7327 |
| 0.483 | 2.24 | 66000 | 0.6555 | 0.7310 |
| 0.4857 | 2.28 | 67000 | 0.6525 | 0.7312 |
| 0.484 | 2.31 | 68000 | 0.6444 | 0.7342 |
| 0.4792 | 2.34 | 69000 | 0.6508 | 0.7330 |
| 0.488 | 2.38 | 70000 | 0.6513 | 0.7344 |
| 0.472 | 2.41 | 71000 | 0.6547 | 0.7346 |
| 0.4872 | 2.45 | 72000 | 0.6500 | 0.7342 |
| 0.4782 | 2.48 | 73000 | 0.6585 | 0.7358 |
| 0.481 | 2.51 | 74000 | 0.6477 | 0.7356 |
| 0.4822 | 2.55 | 75000 | 0.6587 | 0.7346 |
| 0.4728 | 2.58 | 76000 | 0.6572 | 0.7340 |
| 0.4841 | 2.62 | 77000 | 0.6443 | 0.7374 |
| 0.4885 | 2.65 | 78000 | 0.6494 | 0.7362 |
| 0.4752 | 2.68 | 79000 | 0.6509 | 0.7382 |
| 0.4883 | 2.72 | 80000 | 0.6457 | 0.7371 |
| 0.4888 | 2.75 | 81000 | 0.6497 | 0.7364 |
| 0.4844 | 2.79 | 82000 | 0.6481 | 0.7376 |
| 0.4833 | 2.82 | 83000 | 0.6451 | 0.7389 |
| 0.48 | 2.85 | 84000 | 0.6423 | 0.7373 |
| 0.4832 | 2.89 | 85000 | 0.6477 | 0.7357 |
| 0.4805 | 2.92 | 86000 | 0.6464 | 0.7379 |
| 0.4775 | 2.96 | 87000 | 0.6477 | 0.7380 |
| 0.4843 | 2.99 | 88000 | 0.6481 | 0.7381 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
979 | emrecan/distilbert-base-turkish-cased-multinli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
980 | emrecan/distilbert-base-turkish-cased-snli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
981 | erst/xlm-roberta-base-finetuned-db07 | [
"011100",
"011200",
"011300",
"011400",
"011500",
"011600",
"011900",
"012100",
"012200",
"012300",
"012400",
"012500",
"012600",
"012700",
"012800",
"012900",
"013000",
"014100",
"014200",
"014300",
"014400",
"014500",
"014610",
"014620",
"014700",
"014910",
"014... | # Classifying Text into DB07 Codes
This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned to classify Danish descriptions of activities into [Dansk Branchekode DB07](https://www.dst.dk/en/Statistik/dokumentation/nomenklaturer/dansk-branchekode-db07) codes.
## Data
Approximately 2.5 million business names and descriptions of activities from Norwegian and Danish businesses were used to fine-tune the model. The Norwegian descriptions were translated into Danish and the Norwegian SN 2007 codes were translated into Danish DB07 codes.
Activity descriptions and business names were concatenated, with the separator token `</s>` between them. Thus, the model was trained on input texts in the format `f"{description_of_activity}</s>{business_name}"`.
## Quick Start
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("erst/xlm-roberta-base-finetuned-db07")
model = AutoModelForSequenceClassification.from_pretrained("erst/xlm-roberta-base-finetuned-db07")
pl = pipeline(
"sentiment-analysis",
model=model,
tokenizer=tokenizer,
return_all_scores=False,
)
pl("Vi sælger sko")
pl("We sell clothes</s>Clothing ApS")
```
|
982 | erst/xlm-roberta-base-finetuned-nace | [
"0111",
"0112",
"0113",
"0114",
"0115",
"0116",
"0119",
"0121",
"0122",
"0123",
"0124",
"0125",
"0126",
"0127",
"0128",
"0129",
"0130",
"0141",
"0142",
"0143",
"0144",
"0145",
"0146",
"0147",
"0149",
"0150",
"0161",
"0162",
"0163",
"0164",
"0170",
"0210"... | # Classifying Text into NACE Codes
This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned to classify descriptions of activities into [NACE Rev. 2](https://ec.europa.eu/eurostat/web/nace-rev2) codes.
## Data
The data used to fine-tune the model consist of 2.5 million descriptions of activities from Norwegian and Danish businesses. To improve the model's multilingual performance, random samples of the Norwegian and Danish descriptions were machine translated into the following languages:
- English
- German
- Spanish
- French
- Finnish
- Polish
## Quick Start
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("erst/xlm-roberta-base-finetuned-nace")
model = AutoModelForSequenceClassification.from_pretrained("erst/xlm-roberta-base-finetuned-nace")
pl = pipeline(
"sentiment-analysis",
model=model,
tokenizer=tokenizer,
return_all_scores=False,
)
pl("The purpose of our company is to build houses")
```
|
983 | ethanyt/guwen-cls | [
"易藏",
"医藏",
"艺藏",
"史藏",
"佛藏",
"集藏",
"诗藏",
"子藏",
"儒藏",
"道藏"
] | ---
language:
- "zh"
thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
- "text classificatio"
license: "apache-2.0"
pipeline_tag: "text-classification"
widget:
- text: "子曰:“弟子入则孝,出则悌,谨而信,泛爱众,而亲仁。行有馀力,则以学文。”"
---
# Guwen CLS
A Classical Chinese Text Classifier.
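A minimal usage sketch, reusing the sentence from the widget above (the model predicts one of the ten collection categories, e.g. 儒藏 or 史藏):
```python
from transformers import pipeline
# Minimal sketch: classify a Classical Chinese passage into one of the ten categories.
classifier = pipeline("text-classification", model="ethanyt/guwen-cls")
print(classifier("子曰:“弟子入则孝,出则悌,谨而信,泛爱众,而亲仁。行有馀力,则以学文。”"))
```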
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a> |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.