modelId stringlengths 6 107 | label list | readme stringlengths 0 56.2k | readme_len int64 0 56.2k |
|---|---|---|---|
sherover125/newsclassifier | [
"اقتصادی",
"سیاسی",
"سایبری",
"گروهک های معاند",
"متفرقه",
"سلامت",
"فرهنگی",
"ورزشی",
"بینالملل",
"داخلی",
"اجتماعی",
"نظامی"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: newsclassifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newsclassifier
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1405
- Matthews Correlation: 0.9731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2207 | 1.0 | 2397 | 0.1706 | 0.9595 |
| 0.0817 | 2.0 | 4794 | 0.1505 | 0.9663 |
| 0.0235 | 3.0 | 7191 | 0.1405 | 0.9731 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
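For readers who do not read Persian, the class labels attached to this model can be glossed roughly as follows. A minimal sketch; the English glosses are assumptions added for readability, not part of the original card:

```python
# Approximate English glosses for the Persian class labels listed in this
# entry. The glosses are assumptions; they are not part of the model card.
LABEL_GLOSS = {
    "اقتصادی": "Economic",
    "سیاسی": "Political",
    "سایبری": "Cyber",
    "گروهک های معاند": "Hostile groups",
    "متفرقه": "Miscellaneous",
    "سلامت": "Health",
    "فرهنگی": "Cultural",
    "ورزشی": "Sports",
    "بینالملل": "International",
    "داخلی": "Domestic",
    "اجتماعی": "Social",
    "نظامی": "Military",
}
```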
| 1,554 |
suvrobaner/distilbert-base-uncased-finetuned-emotion-en-tweets | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
language: en
tags:
- text-classification
- pytorch
license: apache-2.0
datasets:
- emotion
---
```python
from transformers import pipeline
model_id = "suvrobaner/distilbert-base-uncased-finetuned-emotion-en-tweets"
classifier = pipeline("text-classification", model = model_id)
custom_tweet = "I saw a movie today and it was really good."
preds = classifier(custom_tweet, return_all_scores=True)
labels = ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
import pandas as pd
import matplotlib.pyplot as plt

preds_df = pd.DataFrame(preds[0])
plt.bar(labels, 100 * preds_df["score"], color='C0')
plt.title(f'"{custom_tweet}"')
plt.ylabel("Class probability (%)")
plt.show()
```
| 673 |
pnichite/YTFineTuneBert | null | Entry not found | 15 |
Zaib/Vulnerability-detection | null | ---
tags:
- generated_from_trainer
model-index:
- name: Vulnerability-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vulnerability-detection
This model is a fine-tuned version of [mrm8488/codebert-base-finetuned-detect-insecure-code](https://huggingface.co/mrm8488/codebert-base-finetuned-detect-insecure-code) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,198 |
Supreeth/DeBERTa-Twitter-Emotion-Classification | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: mit
---
# Label - Emotion Table
| Emotion | LABEL |
| -------------- |:-------------: |
| Anger | LABEL_0 |
| Boredom | LABEL_1 |
| Empty | LABEL_2 |
| Enthusiasm | LABEL_3 |
| Fear | LABEL_4 |
| Fun | LABEL_5 |
| Happiness | LABEL_6 |
| Hate | LABEL_7 |
| Joy | LABEL_8 |
| Love | LABEL_9 |
| Neutral | LABEL_10 |
| Relief | LABEL_11 |
| Sadness | LABEL_12 |
| Surprise | LABEL_13 |
| Worry | LABEL_14 |
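Since the model returns generic `LABEL_*` ids, a small mapping built from the table above can make pipeline output readable. A minimal sketch; the `readable` helper name is illustrative, not part of the model:

```python
# Mapping from the model's generic LABEL_* ids to emotion names,
# transcribed from the label-emotion table above.
ID2EMOTION = {
    "LABEL_0": "Anger", "LABEL_1": "Boredom", "LABEL_2": "Empty",
    "LABEL_3": "Enthusiasm", "LABEL_4": "Fear", "LABEL_5": "Fun",
    "LABEL_6": "Happiness", "LABEL_7": "Hate", "LABEL_8": "Joy",
    "LABEL_9": "Love", "LABEL_10": "Neutral", "LABEL_11": "Relief",
    "LABEL_12": "Sadness", "LABEL_13": "Surprise", "LABEL_14": "Worry",
}

def readable(preds):
    """Replace LABEL_* ids in text-classification pipeline output
    with human-readable emotion names."""
    return [{**p, "label": ID2EMOTION[p["label"]]} for p in preds]
```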
| 679 |
naem1023/xlmlongformer-phrase-clause-classification-dev | null | ---
license: apache-2.0
---
| 28 |
Milian/bert_finetuning_test | [
"LABEL_0",
"LABEL_1"
] | Entry not found | 15 |
RecordedFuture/Swedish-Sentiment-Violence | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: sv
license: mit
---
## Swedish BERT models for sentiment analysis
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for sentiment analysis in Swedish. The two models are based on the [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) model and have been fine-tuned to solve a multi-label sentiment analysis task.
The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong sentiment" at the respective indexes.
The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums.
The models are trained only on Swedish data and support inference of Swedish input texts only. Inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Swedish-Sentiment-Fear
The model can be imported from the transformers library by running
```python
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
```
When the model and tokenizer are initialized the model can be used for inference.
#### Sentiment definitions
#### The strong sentiment includes but is not limited to
Texts that:
- Hold an expressive emphasis on fear and/or anxiety
#### The weak sentiment includes but is not limited to
Texts that:
- Express fear and/or anxiety in a neutral way
#### Verification metrics
During training, validation metrics were maximized at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.45 | 0.8754 | 0.8618 | 0.8895 |
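The breakpoint can be read as a probability threshold applied to the model's softmaxed outputs. Below is a minimal sketch of one possible decision rule, assuming the "Weak sentiment" and "Strong sentiment" probabilities are summed before thresholding; the exact rule used during validation is not documented in this card:

```python
import math

def softmax(logits):
    """Convert the model's three raw outputs into probabilities."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def flag_sentiment(logits, breakpoint=0.45):
    """Flag a text when the combined probability of "Weak sentiment"
    (index 1) and "Strong sentiment" (index 2) clears the breakpoint.
    The summing rule is an assumption, not taken from the card."""
    probs = softmax(logits)
    return probs[1] + probs[2] >= breakpoint
```

The same helper applies to the violence model by passing `breakpoint=0.35`.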
### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
```python
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
```
When the model and tokenizer are initialized the model can be used for inference.
#### Sentiment definitions
#### The strong sentiment includes but is not limited to
Texts that:
- Reference highly violent acts
- Hold an aggressive tone
#### The weak sentiment includes but is not limited to
Texts that:
- Include general violent statements that do not fall under the strong sentiment
#### Verification metrics
During training, validation metrics were maximized at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.35 | 0.7677 | 0.7456 | 0.791 | | 3,299 |
anirudh21/albert-large-v2-finetuned-wnli | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-large-v2-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5352112676056338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2-finetuned-wnli
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Accuracy: 0.5352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 17 | 0.7292 | 0.4366 |
| No log | 2.0 | 34 | 0.6919 | 0.5352 |
| No log | 3.0 | 51 | 0.7084 | 0.4648 |
| No log | 4.0 | 68 | 0.7152 | 0.5352 |
| No log | 5.0 | 85 | 0.7343 | 0.5211 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
| 1,836 |
anirudh21/albert-xlarge-v2-finetuned-mrpc | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: albert-xlarge-v2-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7132352941176471
- name: F1
type: f1
value: 0.8145800316957211
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xlarge-v2-finetuned-mrpc
This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5563
- Accuracy: 0.7132
- F1: 0.8146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.6898 | 0.5221 | 0.6123 |
| No log | 2.0 | 126 | 0.6298 | 0.6838 | 0.8122 |
| No log | 3.0 | 189 | 0.6043 | 0.7010 | 0.8185 |
| No log | 4.0 | 252 | 0.5834 | 0.7010 | 0.8146 |
| No log | 5.0 | 315 | 0.5563 | 0.7132 | 0.8146 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,983 |
anirudh21/albert-xlarge-v2-finetuned-wnli | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-xlarge-v2-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xlarge-v2-finetuned-wnli
This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6869
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6906 | 0.5070 |
| No log | 2.0 | 80 | 0.6869 | 0.5634 |
| No log | 3.0 | 120 | 0.6905 | 0.5352 |
| No log | 4.0 | 160 | 0.6960 | 0.4225 |
| No log | 5.0 | 200 | 0.7011 | 0.3803 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,840 |
bowipawan/bert-sentimental | [
"negative",
"neutral",
"positive"
] | For studying only | 17 |
cataremix15/distilbert-tiln-proj | null | Entry not found | 15 |
cemdenizsel/51k-pretrained-bert-model | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
eliza-dukim/roberta-large-second | [
"no_relation",
"org:alternate_names",
"org:dissolved",
"org:founded",
"org:founded_by",
"org:member_of",
"org:members",
"org:number_of_employees/members",
"org:place_of_headquarters",
"org:political/religious_affiliation",
"org:product",
"org:top_members/employees",
"per:alternate_names",
... | Entry not found | 15 |
ishan/distilbert-base-uncased-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: en
thumbnail:
tags:
- pytorch
- text-classification
datasets:
- MNLI
---
# distilbert-base-uncased finetuned on MNLI
## Model Details and Training Data
We used the pretrained model from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) and finetuned it on [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.
The training parameters were kept the same as [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).
## Evaluation Results
The evaluation results are mentioned in the table below.
| Test Corpus | Accuracy |
|:---:|:---------:|
| Matched | 0.8223 |
| Mismatched | 0.8216 |
| 727 |
lschneidpro/distilbert_uncased_imdb | null | Entry not found | 15 |
mmcquade11/reviews-sentiment-analysis-two | null | Entry not found | 15 |
mrm8488/bert-base-german-dbmdz-cased-finetuned-pawsx-de | null | ---
language: de
datasets:
- xtreme
tags:
- nli
widget:
- text: "Winarsky ist Mitglied des IEEE, Phi Beta Kappa, des ACM und des Sigma Xi. Winarsky ist Mitglied des ACM, des IEEE, der Phi Beta Kappa und der Sigma Xi."
---
# bert-base-german-dbmdz-cased fine-tuned on PAWS-X-de for Paraphrase Identification (NLI)
| 314 |
yoshitomo-matsubara/bert-large-uncased-qnli | null | ---
language: en
tags:
- bert
- qnli
- glue
- torchdistill
license: apache-2.0
datasets:
- qnli
metrics:
- accuracy
---
`bert-large-uncased` fine-tuned on QNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qnli/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
| 828 |
KheireddineDaouadi/ZeroAraElectra | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: ar
tags:
- zero-shot-classification
- nli
- pytorch
datasets:
- xnli
pipeline_tag: zero-shot-classification
license: other
---
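This model is exposed through the zero-shot-classification pipeline, which works by turning every candidate label into an NLI hypothesis and scoring its entailment against the input text used as the premise. A sketch of the hypothesis-expansion step only; the default English template shown is the Transformers default, and for Arabic inputs an Arabic `hypothesis_template` would normally be supplied:

```python
def expand_labels(candidate_labels, template="This example is {}."):
    """Turn each candidate label into an NLI hypothesis; the zero-shot
    pipeline then scores the entailment of each hypothesis against the
    input text used as the premise."""
    return [template.format(label) for label in candidate_labels]
```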
| 142 |
clisi2000/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246284188099615
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8174 | 1.0 | 250 | 0.3166 | 0.905 | 0.9023 |
| 0.2534 | 2.0 | 500 | 0.2183 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cpu
- Datasets 1.16.1
- Tokenizers 0.10.1
| 1,805 |
abhishek/autonlp-swahili-sentiment-615517563 | [
"-1",
"0",
"1"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-swahili-sentiment
co2_eq_emissions: 1.9057858628956459
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 615517563
- CO2 Emissions (in grams): 1.9057858628956459
## Validation Metrics
- Loss: 0.6990908980369568
- Accuracy: 0.695364238410596
- Macro F1: 0.6088819062581828
- Micro F1: 0.695364238410596
- Weighted F1: 0.677326207350606
- Macro Precision: 0.6945099492363175
- Micro Precision: 0.695364238410596
- Weighted Precision: 0.6938596845881614
- Macro Recall: 0.5738408020723632
- Micro Recall: 0.695364238410596
- Weighted Recall: 0.695364238410596
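Note that the micro-averaged F1, precision, and recall above all equal the accuracy. This is expected: for single-label multi-class classification, micro-averaging counts the same global matches for every metric. A quick check against the reported numbers:

```python
# Metrics copied from the validation results above.
accuracy = 0.695364238410596
micro_f1 = 0.695364238410596
micro_precision = 0.695364238410596
micro_recall = 0.695364238410596

# For single-label multi-class problems, micro precision == micro recall
# == micro F1 == accuracy.
assert accuracy == micro_f1 == micro_precision == micro_recall
```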
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-swahili-sentiment-615517563
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-swahili-sentiment-615517563", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-swahili-sentiment-615517563", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,401 |
dannyvas23/clasificacion-texto-suicida-finetuned-amazon-review | null | ---
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
widget:
- text: "no me gusta esta vida."
example_title: "Ejemplo 1"
- text: "odio estar ahi"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
metrics:
- accuracy
model-index:
- name: clasificacion-texto-suicida-finetuned-amazon-review
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificacion-texto-suicida-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1546
- Accuracy: 0.9488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1643 | 1.0 | 12022 | 0.1546 | 0.9488 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,663 |
Volodia/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9280089473757943
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2102
- Accuracy: 0.928
- F1: 0.9280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8028 | 1.0 | 250 | 0.2998 | 0.913 | 0.9117 |
| 0.2314 | 2.0 | 500 | 0.2102 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,804 |
antgoldbloom/distilbert-rater | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-rater
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rater
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
| 1,031 |
Wakaka/bert-finetuned-mrpc | null | Entry not found | 15 |
sabersol/bert-base-uncased-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: cc-by-nc-sa-4.0
---
# CITDA:
Fine-tuned `bert-base-uncased` on the `emotions` dataset
Demo Notebook: https://colab.research.google.com/drive/10ZCFvlf2UV3FjU4ymf4OoipQvqHbIItG?usp=sharing
## Packages
- Install `torch`
- Also, `pip install transformers datasets scikit-learn wandb seaborn python-dotenv`
## Train
1. Rename `.env.example` to `.env` and set an API key from [wandb](https://wandb.ai/authorize).
2. Adjust model parameters in the `explainableai.py` file if needed.
3. The model (`pytorch_model.bin`) is based on `bert-base-uncased` and already trained on the [emotions dataset](https://huggingface.co/datasets/emotion).
To reproduce the training, run `finetune-emotions.py`. You can change the base model or the dataset by editing that file's code.
## Example
Run `example.py` | 1,005 |
Shenghao1993/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.929
- name: F1
type: f1
value: 0.9288515820399124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2196
- Accuracy: 0.929
- F1: 0.9289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8486 | 1.0 | 250 | 0.3306 | 0.903 | 0.8989 |
| 0.2573 | 2.0 | 500 | 0.2196 | 0.929 | 0.9289 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,804 |
epomponio/my-finetuned-bert | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
ManqingLiu/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9305
- name: F1
type: f1
value: 0.9306050612701778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- Accuracy: 0.9305
- F1: 0.9306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1755 | 1.0 | 250 | 0.1831 | 0.925 | 0.9249 |
| 0.1118 | 2.0 | 500 | 0.1709 | 0.9305 | 0.9306 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,807 |
alk/distilbert-base-uncased-finetuned-header-classifier | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-header-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-header-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,209 |
Gunulhona/tbnlimodel_v2 | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
PGT/graphnystromformer-l-artificial-balanced-max500-490000-0 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | Entry not found | 15 |
BlindMan820/Sarcastic-News-Headlines | null | ---
language:
- en
tags:
- Text
- Sequence-Classification
- Sarcasm
- DistilBert
datasets:
- Kaggle Dataset
metrics:
- precision
- recall
- f1
---
Dataset Link - https://www.kaggle.com/rmisra/news-headlines-dataset-for-sarcasm-detection | 250 |
CenIA/bert-base-spanish-wwm-uncased-finetuned-xnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
ClaudeYang/awesome_fb_model | [
"contradiction",
"entailment",
"neutral"
] | ---
pipeline_tag: zero-shot-classification
datasets:
- multi_nli
widget:
- text: "ETH"
candidate_labels: "Location & Address, Employment, Organizational, Name, Service, Studies, Science"
hypothesis_template: "This is {}."
---
ETH Zeroshot | 243 |
Elluran/Hate_speech_detector | [
"LABEL_0"
] | Entry not found | 15 |
Jodsa/camembert_clf | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
LilaBoualili/bert-sim-doc | null | Entry not found | 15 |
NDugar/v3large-2epoch | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. It outperforms BERT and RoBERTa on majority of NLU tasks with 80GB training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers, 1536 hidden size. The total parameters are 1.5B and it is trained with 160GB raw data.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from the pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it is faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \\
run_glue.py \\
--model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 256 \\
--per_device_train_batch_size ${batch_size} \\
--learning_rate 3e-6 \\
--num_train_epochs 3 \\
--output_dir $output_dir \\
--overwrite_output_dir \\
--logging_steps 10 \\
--logging_dir $output_dir \\
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` | 4,788 |
ayameRushia/roberta-base-indonesian-sentiment-analysis-smsa | [
"POSITIVE",
"NEUTRAL",
"NEGATIVE"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
model-index:
- name: roberta-base-indonesian-sentiment-analysis-smsa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9349206349206349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-indonesian-sentiment-analysis-smsa
This model is a fine-tuned version of [flax-community/indonesian-roberta-base](https://huggingface.co/flax-community/indonesian-roberta-base) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4252
- Accuracy: 0.9349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7582 | 1.0 | 688 | 0.3280 | 0.8786 |
| 0.3225 | 2.0 | 1376 | 0.2398 | 0.9206 |
| 0.2057 | 3.0 | 2064 | 0.2574 | 0.9230 |
| 0.1642 | 4.0 | 2752 | 0.2820 | 0.9302 |
| 0.1266 | 5.0 | 3440 | 0.3344 | 0.9317 |
| 0.0608 | 6.0 | 4128 | 0.3543 | 0.9341 |
| 0.058 | 7.0 | 4816 | 0.4252 | 0.9349 |
| 0.0315 | 8.0 | 5504 | 0.4736 | 0.9310 |
| 0.0166 | 9.0 | 6192 | 0.4649 | 0.9349 |
| 0.0143 | 10.0 | 6880 | 0.4648 | 0.9341 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 2,266 |
boychaboy/kobias_klue-bert-base | [
"biased",
"none"
] | Entry not found | 15 |
deeq/dbert-sentiment | [
"0",
"1"
] | ```
from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline
model = BertForSequenceClassification.from_pretrained("deeq/dbert-sentiment")
tokenizer = BertTokenizer.from_pretrained("deeq/dbert")
nlp = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(nlp("좋아요"))
print(nlp("글쎄요"))
```
| 343 |
monologg/koelectra-small-finetuned-sentiment | [
"negative",
"positive"
] | Entry not found | 15 |
cambridgeltl/sst_bert-base-uncased | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
xaqren/sentiment_analysis | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- Confidential
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
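The MLM masking step described above can be sketched in a few lines. This is only a toy illustration (real BERT masking also replaces some of the selected tokens with random tokens or leaves them unchanged rather than always using `[MASK]`):

```python
import random

random.seed(0)  # reproducible example
tokens = "the quick brown fox jumps over the lazy dog".split()

# hide ~15% of token positions; the model must predict the originals
k = max(1, int(0.15 * len(tokens)))
masked_positions = set(random.sample(range(len(tokens)), k))
masked = [("[MASK]" if i in masked_positions else t) for i, t in enumerate(tokens)]

print(masked)
```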
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model description [xaqren/sentiment_analysis]
This is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis; it is not intended for
further downstream fine-tuning on any other tasks. The model is trained on a confidential dataset for text classification.
Intel/bart-large-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bart-large-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8774509803921569
- name: F1
type: f1
value: 0.9119718309859154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-mrpc
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- Accuracy: 0.8775
- F1: 0.9120
- Combined Score: 0.8947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
| 1,510 |
Philip-Jan/finetuning-sentiment-model-3000-samples | [
"neg",
"pos"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8646864686468646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3328
- Accuracy: 0.8633
- F1: 0.8647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,519 |
Wakaka/bert-finetuned-imdb | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: bert-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-imdb
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5591
- Accuracy: 0.866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.4995 | 0.79 |
| No log | 2.0 | 250 | 0.4000 | 0.854 |
| No log | 3.0 | 375 | 0.5591 | 0.866 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,675 |
LiYuan/Amazon-Cup-Cross-Encoder-Regression | [
"LABEL_0"
] | ---
license: afl-3.0
---
This model is quite accurate at reranking products for a given query, an approach intuitively inspired by information retrieval techniques. In 2019, Nils Reimers and Iryna Gurevych introduced a new transformer model called Sentence-BERT (Sentence Embeddings using Siamese BERT-Networks), presented in this paper: https://doi.org/10.48550/arxiv.1908.10084.
Sentence-BERT modifies BERT by adding a pooling operation on top of BERT's output. In this way, it can produce a fixed-size sentence embedding that can be used to compute cosine similarity, and so on. To obtain meaningful sentence embeddings, in a vector space where similar or paired sentences are close, the authors created a triplet network that modifies BERT as shown in the architecture figure below.

# Download and Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("LiYuan/Amazon-Cup-Cross-Encoder-Regression")
model = AutoModelForSequenceClassification.from_pretrained("LiYuan/Amazon-Cup-Cross-Encoder-Regression")
```
As we can observe from the figure above, a pooling layer is added on top of each BERT model to obtain the sentence embeddings $u$ and $v$. Finally, the cosine similarity between $u$ and $v$ can be computed and compared with the true score, and the mean squared error loss, which is the objective function, can be backpropagated through this BERT network to update the weights.
In our Amazon case, the query is sentence A and the concatenated product attributes are sentence B. We stratified-split the merged set into **571,223** rows for training, **500** rows for validation, and **3,000** rows for test. We limited the output score to between 0 and 1. The following scores represent the degree of relevance between the query and the product attributes, following the Amazon KDD Cup website; however, they can be adjusted to improve model performance.
- 1: exact
- 0.1: substitute
- 0.01: complement
- 0: irrelevance
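Once the model assigns such relevance scores, reranking a matched product list reduces to a sort. The scores and product names below are hypothetical, not actual model outputs:

```python
# hypothetical (query, product) relevance scores from the regression model
products = ["usb-c cable 2m", "usb-c wall charger", "hdmi cable"]
scores = [0.93, 0.12, 0.02]  # e.g. exact, substitute, irrelevance

# rerank products by descending predicted relevance
reranked = [p for _, p in sorted(zip(scores, products), reverse=True)]
print(reranked)
```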
For this regression model, we used the Pearson correlation coefficient and Spearman's rank correlation coefficient to measure the model performance. If the correlation coefficient is high, the model performs well. The validation Pearson is **0.5670** and the validation Spearman is **0.5662**. This is not a bad result.
We also evaluated the model on the test set. We got **0.5321** for Pearson and **0.5276** for Spearman. The test results are similar to those on the validation set, suggesting that the model generalizes well.
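For reference, the two correlation metrics can be computed from scratch as below. The toy labels and predictions are purely illustrative, not the model's actual outputs:

```python
import math

def pearson(x, y):
    # Pearson correlation: covariance over product of standard deviations
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman = Pearson correlation of the ranks (no tie handling in this sketch)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

labels = [1.0, 0.1, 0.01, 0.0]   # exact, substitute, complement, irrelevance
preds = [0.8, 0.3, 0.05, 0.1]    # hypothetical model scores
print(round(pearson(labels, preds), 4), round(spearman(labels, preds), 4))
```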
Finally, once we have this fine-tuned Cross-Encoder regression model, we can feed a new query and its matched product list into it and use the output scores to rerank the products, which can improve the customer's online shopping experience.
LianZhang/finetuning-sentiment-model-3000-samples | [
"neg",
"pos"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8754208754208754
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3182
- Accuracy: 0.8767
- F1: 0.8754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,521 |
Willy/bert-base-spanish-wwm-cased-finetuned-NLP-IE-4 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-NLP-IE-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-NLP-IE-4
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7825
- Accuracy: 0.4931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7005 | 1.0 | 9 | 0.6977 | 0.5069 |
| 0.65 | 2.0 | 18 | 0.7035 | 0.4861 |
| 0.6144 | 3.0 | 27 | 0.7189 | 0.4722 |
| 0.5898 | 4.0 | 36 | 0.7859 | 0.4861 |
| 0.561 | 5.0 | 45 | 0.7825 | 0.4931 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,654 |
projecte-aina/roberta-base-ca-v2-cased-sts | [
"SIMILARITY"
] | ---
pipeline_tag: text-classification
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "semantic textual similarity"
- "sts-ca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/sts-ca"
metrics:
- "combined_score"
model-index:
- name: roberta-base-ca-v2-cased-sts
results:
- task:
type: text-classification
dataset:
type: projecte-aina/sts-ca
name: STS-ca
metrics:
- name: Combined score
type: combined_score
value: 0.7907
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Semantic Textual Similarity.
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
The **roberta-base-ca-v2-cased-sts** is a Semantic Textual Similarity (STS) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended Uses and Limitations
**roberta-base-ca-v2-cased-sts** model can be used to assess the similarity between two snippets of text. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
To get the correct<sup>1</sup> model's prediction scores with values between 0.0 and 5.0, use the following code:
```python
from transformers import pipeline, AutoTokenizer
from scipy.special import logit
model = 'projecte-aina/roberta-base-ca-v2-cased-sts'
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)
def prepare(sentence_pairs):
sentence_pairs_prep = []
for s1, s2 in sentence_pairs:
sentence_pairs_prep.append(f"{tokenizer.cls_token} {s1}{tokenizer.sep_token}{tokenizer.sep_token} {s2}{tokenizer.sep_token}")
return sentence_pairs_prep
sentence_pairs = [("El llibre va caure per la finestra.", "El llibre va sortir volant."),
("M'agrades.", "T'estimo."),
("M'agrada el sol i la calor", "A la Garrotxa plou molt.")]
predictions = pipe(prepare(sentence_pairs), add_special_tokens=False)
# convert back to scores to the original 0 and 5 interval
for prediction in predictions:
prediction['score'] = logit(prediction['score'])
print(predictions)
```
Expected output:
```
[{'label': 'SIMILARITY', 'score': 2.118301674983813},
{'label': 'SIMILARITY', 'score': 2.1799755855125853},
{'label': 'SIMILARITY', 'score': 0.9511617858568939}]
```
<sup>1</sup> _**avoid using the widget** scores since they are normalized and do not reflect the original annotation values._
## Training
### Training data
We used the STS dataset in Catalan called [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) for training and evaluation.
### Training Procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set, and then evaluated it on the test set.
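The checkpoint-selection step can be sketched as follows; the per-epoch dev scores and checkpoint names here are made up for illustration:

```python
# hypothetical (pearson, spearman) dev-set scores per training epoch
dev_scores = {
    "checkpoint-epoch-1": (0.74, 0.73),
    "checkpoint-epoch-2": (0.78, 0.77),
    "checkpoint-epoch-3": (0.80, 0.79),
    "checkpoint-epoch-4": (0.79, 0.78),
    "checkpoint-epoch-5": (0.78, 0.78),
}

# combined score = average of Pearson and Spearman, as used for STS-ca
best = max(dev_scores, key=lambda c: sum(dev_scores[c]) / 2)
print(best)
```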
## Evaluation
### Variable and Metrics
This model was finetuned maximizing the average score between the Pearson and Spearman correlations.
## Evaluation results
We evaluated the _roberta-base-ca-v2-cased-sts_ on the STS-ca test set against standard multilingual and monolingual baselines:
| Model | STS-ca (Combined score) |
| ------------|:-------------|
| roberta-base-ca-v2-cased-sts | 79.07 |
| roberta-base-ca-cased-sts | **80.19** |
| mBERT | 74.26 |
| XLM-RoBERTa | 61.61 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Contributions
[N/A] | 5,727 |
amanbawa96/bert-base-uncase-contracts | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | Bert Base Uncased Contract model trained on CUAD Dataset
The Dataset can be downloaded from [Here](https://www.atticusprojectai.org/cuad). | 139 |
ccarvajal/beto-emoji | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
language:
- es
---
# beto-emoji
Fine-tuning [BETO](https://github.com/dccuchile/beto) for emoji prediction.
## Repository
Details on training and a usage example are shown in [github.com/camilocarvajalreyes/beto-emoji](https://github.com/camilocarvajalreyes/beto-emoji). A deeper analysis of this and other models on the full dataset can be found in [github.com/furrutiav/data-mining-2022](https://github.com/furrutiav/data-mining-2022). We used this model for a project in the [CC5205 Data Mining](https://github.com/dccuchile/CC5205) course.
## Example
Inspired by model card from [cardiffnlp/twitter-roberta-base-emoji](https://huggingface.co/cardiffnlp/twitter-roberta-base-emoji).
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
MODEL = f"ccarvajal/beto-emoji"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/camilocarvajalreyes/beto-emoji/main/es_mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "que viva españa"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output
```python
1) 🇪🇸 0.2508
2) 😍 0.238
3) 👌 0.2225
4) 😂 0.0806
5) ❤ 0.0489
6) 😁 0.0415
7) 😜 0.0232
8) 😎 0.0229
9) 😊 0.0156
10) 😉 0.0119
11) 💜 0.0079
12) 💕 0.0077
13) 💪 0.0066
14) 💘 0.0054
15) 💙 0.0052
16) 💞 0.005
17) 😘 0.0034
18) 🎶 0.0022
19) ✨ 0.0007
```
## Results in test set
```
              precision    recall  f1-score   support

           ❤       0.39      0.43      0.41      2141
           😍       0.29      0.39      0.33      1408
           😂       0.51      0.51      0.51      1499
           💕       0.09      0.05      0.06       352
           😊       0.12      0.23      0.16       514
           😘       0.24      0.23      0.24       397
           💪       0.37      0.43      0.40       307
           😉       0.15      0.17      0.16       453
           👌       0.09      0.16      0.11       180
           🇪🇸       0.46      0.46      0.46       424
           😎       0.12      0.11      0.11       339
           💙       0.36      0.02      0.04       413
           💜       0.00      0.00      0.00       235
           😜       0.04      0.02      0.02       274
           💞       0.00      0.00      0.00        93
           ✨       0.26      0.12      0.17       416
           🎶       0.25      0.24      0.24       212
           💘       0.00      0.00      0.00       134
           😁       0.05      0.03      0.04       209

    accuracy                           0.30     10000
   macro avg       0.20      0.19      0.18     10000
weighted avg       0.29      0.30      0.29     10000
```
[Another example](https://github.com/camilocarvajalreyes/beto-emoji/blob/main/attention_visualisation.ipynb) with a visualisation of the attention modules within this model is carried out using [bertviz](https://github.com/jessevig/bertviz).
## Reproducibility
The Multilingual Emoji Prediction dataset (Barbieri et al., 2018) consists of tweets in English and Spanish that originally had a single emoji, which is later used as a tag. Test and trial sets can be downloaded [here](https://github.com/fvancesco/Semeval2018-Task2-Emoji-Detection/blob/master/dataset/Semeval2018-Task2-EmojiPrediction.zip?raw=true), but the train set needs to be downloaded using a [twitter crawler](https://github.com/fra82/twitter-crawler/blob/master/semeval2018task2TwitterCrawlerHOWTO.md). The goal is to predict that single emoji that was originally in the tweet using the text in it (out of a fixed set of possible emojis, 20 for English and 19 for Spanish).
Training parameters:
```python
training_args = TrainingArguments(
output_dir="./results",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=5,
weight_decay=0.01
)
```
| 4,878 |
bhadresh-savani/bertweet-base-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bertweet-base-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.929
- name: F1
type: f1
value: 0.9295613935787139
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.925
verified: true
- name: Precision Macro
type: precision
value: 0.8722017563353339
verified: true
- name: Precision Micro
type: precision
value: 0.925
verified: true
- name: Precision Weighted
type: precision
value: 0.9283646705517916
verified: true
- name: Recall Macro
type: recall
value: 0.8982480793145559
verified: true
- name: Recall Micro
type: recall
value: 0.925
verified: true
- name: Recall Weighted
type: recall
value: 0.925
verified: true
- name: F1 Macro
type: f1
value: 0.883488774573809
verified: true
- name: F1 Micro
type: f1
value: 0.925
verified: true
- name: F1 Weighted
type: f1
value: 0.9259820821054494
verified: true
- name: loss
type: loss
value: 0.18158096075057983
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-finetuned-emotion
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1737
- Accuracy: 0.929
- F1: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9469 | 1.0 | 250 | 0.3643 | 0.895 | 0.8921 |
| 0.2807 | 2.0 | 500 | 0.2173 | 0.9245 | 0.9252 |
| 0.1749 | 3.0 | 750 | 0.1859 | 0.926 | 0.9266 |
| 0.1355 | 4.0 | 1000 | 0.1737 | 0.929 | 0.9296 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
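If the checkpoint's config returns generic `LABEL_k` names, they can be mapped back to the six emotion classes; a small sketch, assuming the standard class order of the emotion dataset (the model's own `config.json` id2label, if present, takes precedence):

```python
# The emotion dataset's six classes in their documented order. This is an
# assumption: if the model config defines its own id2label, use that instead.
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def label_name(label_id):
    """Map a numeric class id (or a 'LABEL_3'-style string) to an emotion."""
    if isinstance(label_id, str) and label_id.startswith("LABEL_"):
        label_id = int(label_id.split("_")[1])
    return EMOTIONS[label_id]

print(label_name("LABEL_1"))  # joy
print(label_name(4))          # fear
```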
| 3,072 |
Cameron/BERT-eec-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
CenIA/bert-base-spanish-wwm-uncased-finetuned-mldoc | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
EMBEDDIA/sloberta-tweetsentiment | [
"Negative",
"Neutral",
"Positive"
] | Entry not found | 15 |
JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector | [
"AGAINST",
"FAVOR",
"NEUTRAL"
] | ---
license: apache-2.0
language: ca
tags:
- "catalan"
datasets:
- catalonia_independence
metrics:
- accuracy
model-index:
- name: roberta-base-ca-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: catalonia_independence
type: catalonia_independence
args: catalan
metrics:
- name: Accuracy
type: accuracy
value: 0.7611940298507462
widget:
- text: "Puigdemont, a l'estat espanyol: Quatre anys després, ens hem guanyat el dret a dir prou"
- text: "Llarena demana la detenció de Comín i Ponsatí aprofitant que són a Itàlia amb Puigdemont"
- text: "Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. És a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, té un sentiment excloent, només se senten catalans, i un 4% sol espanyol."
---
# roberta-base-ca-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6065
- Accuracy: 0.7612
<details>
## Training and evaluation data
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 377 | 0.6311 | 0.7453 |
| 0.7393 | 2.0 | 754 | 0.6065 | 0.7612 |
| 0.5019 | 3.0 | 1131 | 0.6340 | 0.7547 |
| 0.3837 | 4.0 | 1508 | 0.6777 | 0.7597 |
| 0.3837 | 5.0 | 1885 | 0.7232 | 0.7582 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. És a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, té un sentiment excloent, només se senten catalans, i un 4% sol espanyol."
)
# Output:
[{'label': 'AGAINST', 'score': 0.7457581758499146}]
independence_analysis(
"Llarena demana la detenció de Comín i Ponsatí aprofitant que són a Itàlia amb Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.7436802983283997}]
independence_analysis(
"Puigdemont, a l'estat espanyol: Quatre anys després, ens hem guanyat el dret a dir prou"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9040119647979736}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(CATALAN).ipynb#scrollTo=j29NHJtOyAVU)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thx to HF.co & [@lewtun](https://github.com/lewtun) for Dataset ;)
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) | 4,018 |
M-FAC/bert-tiny-finetuned-sst2 | null | # BERT-tiny model finetuned with M-FAC
This model is fine-tuned on the SST-2 dataset with the state-of-the-art second-order optimizer M-FAC.
Check the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For a fair comparison against the default Adam baseline, we fine-tune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer for M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
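These hyperparameters correspond one-to-one to the `--optim_args` JSON string passed to the training script in the bash invocation further down this card; a trivial sketch of constructing that string with the standard library:

```python
import json

# M-FAC hyperparameters from above; the key names mirror the --optim_args
# flag accepted by the modified run_glue.py invocation shown in this card.
mfac_args = {"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}

optim_args = json.dumps(mfac_args)
print(optim_args)  # {"lr": 0.0001, "num_grads": 1024, "damp": 1e-06}
```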
## Results
We share the best model out of 5 runs with the following score on the SST-2 validation set:
```bash
accuracy = 83.02
```
Mean and standard deviation for 5 runs on SST-2 validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 80.11 ± 0.65 |
| M-FAC | 81.86 ± 0.76 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name sst2 \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 3 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| 2,730 |
RavenK/bert-base-uncased-sst2 | null | Entry not found | 15 |
Rexhaif/rubert-base-srl | [
"инструмент",
"каузатор",
"экспериенцер"
] | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: rubert-base-srl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-srl
This model is a fine-tuned version of [./ruBert-base/](https://huggingface.co/./ruBert-base/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2429
- F1: 0.9563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5816 | 1.0 | 57 | 0.3865 | 0.8371 |
| 0.3685 | 2.0 | 114 | 0.1707 | 0.9325 |
| 0.1057 | 3.0 | 171 | 0.0972 | 0.9563 |
| 0.0964 | 4.0 | 228 | 0.1429 | 0.9775 |
| 0.1789 | 5.0 | 285 | 0.2493 | 0.9457 |
| 0.0016 | 6.0 | 342 | 0.1900 | 0.6349 |
| 0.0013 | 7.0 | 399 | 0.2060 | 0.9563 |
| 0.0008 | 8.0 | 456 | 0.2321 | 0.9563 |
| 0.0006 | 9.0 | 513 | 0.2412 | 0.9563 |
| 0.0006 | 10.0 | 570 | 0.2429 | 0.9563 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,861 |
Smone55/autonlp-au_topics-452311620 | [
"-1",
"0",
"1",
"10",
"100",
"101",
"102",
"103",
"104",
"105",
"106",
"107",
"108",
"109",
"11",
"110",
"111",
"112",
"113",
"114",
"115",
"116",
"117",
"118",
"119",
"12",
"120",
"121",
"122",
"123",
"124",
"125",
"13",
"14",
"15",
"16",
"17"... | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Smone55/autonlp-data-au_topics
co2_eq_emissions: 208.0823957145878
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 452311620
- CO2 Emissions (in grams): 208.0823957145878
## Validation Metrics
- Loss: 0.5259971022605896
- Accuracy: 0.8767479025169796
- Macro F1: 0.8618813750734912
- Micro F1: 0.8767479025169796
- Weighted F1: 0.8742964006840133
- Macro Precision: 0.8627700506991158
- Micro Precision: 0.8767479025169796
- Weighted Precision: 0.8755603985289852
- Macro Recall: 0.8662183006750934
- Micro Recall: 0.8767479025169796
- Weighted Recall: 0.8767479025169796
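A side note on the metrics above: for single-label multi-class problems, micro-averaged precision, recall and F1 all collapse to plain accuracy, which is why those four numbers coincide. A small self-contained check:

```python
# Each prediction is exactly one class, so every misclassification counts
# once as a false positive (for the predicted class) and once as a false
# negative (for the true class); micro-averaged P, R and F1 therefore all
# equal accuracy.
def micro_f1(y_true, y_pred):
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = fn = len(y_true) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 1, 2, 1]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(micro_f1(y_true, y_pred), accuracy)  # 0.8 0.8
```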
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Smone55/autonlp-au_topics-452311620
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Smone55/autonlp-au_topics-452311620", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Smone55/autonlp-au_topics-452311620", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,368 |
TransQuest/monotransquest-da-si_en-wiki | [
"LABEL_0"
] | ---
language: si-en
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on two aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-si_en-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 5,401 |
akahana/indonesia-emotion-roberta | [
"SEDIH",
"MARAH",
"CINTA",
"TAKUT",
"BAHAGIA"
] | ---
language: "id"
widget:
- text: "dia orang yang baik ya bunds."
---
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/indonesia-emotion-roberta"
emotion = pipeline('text-classification', model=path, device=0)
set_seed(42)
kalimat = "dia orang yang baik ya bunds."
preds = emotion(kalimat)
preds
[{'label': 'BAHAGIA', 'score': 0.8790940046310425}]
``` | 436 |
amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2 | [
"☀",
"☹️",
"✨",
"❤",
"🇺🇸",
"🎄",
"💕",
"💙",
"💜",
"💢",
"💯",
"📷",
"📸",
"🔥",
"😁",
"😂",
"😉",
"😊",
"😍",
"😎",
"😔",
"😘",
"😜",
"😠",
"😡",
"😤",
"😩",
"😭",
"😳",
"🙃",
"🙄",
"🙈"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-en-ru-emoji-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-en-ru-emoji-v2
This model is a fine-tuned version of [DeepPavlov/xlm-roberta-large-en-ru](https://huggingface.co/DeepPavlov/xlm-roberta-large-en-ru) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3356
- Accuracy: 0.3102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.4 | 200 | 3.0592 | 0.1204 |
| No log | 0.81 | 400 | 2.5356 | 0.2480 |
| 2.6294 | 1.21 | 600 | 2.4570 | 0.2569 |
| 2.6294 | 1.62 | 800 | 2.3332 | 0.2832 |
| 1.9286 | 2.02 | 1000 | 2.3354 | 0.2803 |
| 1.9286 | 2.42 | 1200 | 2.3610 | 0.2881 |
| 1.9286 | 2.83 | 1400 | 2.3004 | 0.2973 |
| 1.7312 | 3.23 | 1600 | 2.3619 | 0.3026 |
| 1.7312 | 3.64 | 1800 | 2.3596 | 0.3032 |
| 1.5816 | 4.04 | 2000 | 2.2972 | 0.3072 |
| 1.5816 | 4.44 | 2200 | 2.3077 | 0.3073 |
| 1.5816 | 4.85 | 2400 | 2.3356 | 0.3102 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| 2,106 |
anirudh21/albert-large-v2-finetuned-rte | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-large-v2-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5487364620938628
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2-finetuned-rte
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6827
- Accuracy: 0.5487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 18 | 0.6954 | 0.5271 |
| No log | 2.0 | 36 | 0.6860 | 0.5379 |
| No log | 3.0 | 54 | 0.6827 | 0.5487 |
| No log | 4.0 | 72 | 0.7179 | 0.5235 |
| No log | 5.0 | 90 | 0.7504 | 0.5379 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
| 1,833 |
boronbrown48/topic_otherTopics_v2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
boychaboy/kobias_klue-roberta-base | [
"biased",
"none"
] | Entry not found | 15 |
caioamb/bert-base-uncased-finetuned-md-simpletransformers | null | Entry not found | 15 |
chitra/finetune-paraphrase-model | null | ---
tags:
- generated_from_trainer
model-index:
- name: finetune-paraphrase-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-paraphrase-model
This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.1 | 200 | 3.0116 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,237 |
cvcio/mediawatch-el-topics | [
"AFFAIRS",
"AGRICULTURE",
"ARTS_AND_CULTURE",
"BREAKING_NEWS",
"BUSINESS",
"COVID",
"CRIME",
"ECONOMY",
"EDUCATION",
"ELECTIONS",
"ENTERTAINMENT",
"ENVIRONMENT",
"FOOD",
"HEALTH",
"INTERNATIONAL",
"JUSTICE",
"LAW_AND_ORDER",
"MILITARY",
"NON_PAPER",
"OPINION",
"POLITICS",
"... | ---
language: el
license: gpl-3.0
tags:
- roberta
- Greek
- news
- transformers
- text-classification
pipeline_tag: text-classification
model-index:
- name: mediawatch-el-topics
results:
- task:
type: text-classification
name: Multi Label Text Classification
metrics:
- type: roc_auc
value: 98.55
name: ROCAUC
- type: eval_AFFAIRS
value: 98.72
name: AFFAIRS
- type: eval_AGRICULTURE
value: 97.99
name: AGRICULTURE
- type: eval_ARTS_AND_CULTURE
value: 98.38
name: ARTS_AND_CULTURE
- type: eval_BREAKING_NEWS
value: 96.75
name: BREAKING_NEWS
- type: eval_BUSINESS
value: 98.11
name: BUSINESS
- type: eval_COVID
value: 96.2
name: COVID
- type: eval_CRIME
value: 98.85
name: CRIME
- type: eval_ECONOMY
value: 97.65
name: ECONOMY
- type: eval_EDUCATION
value: 98.65
name: EDUCATION
- type: eval_ELECTIONS
value: 99.4
name: ELECTIONS
- type: eval_ENTERTAINMENT
value: 99.25
name: ENTERTAINMENT
- type: eval_ENVIRONMENT
value: 98.47
name: ENVIRONMENT
- type: eval_FOOD
value: 99.34
name: FOOD
- type: eval_HEALTH
value: 97.23
name: HEALTH
- type: eval_INTERNATIONAL
value: 96.24
name: INTERNATIONAL
- type: eval_JUSTICE
value: 98.62
name: JUSTICE
- type: eval_LAW_AND_ORDER
value: 91.77
name: LAW_AND_ORDER
- type: eval_MILITARY
value: 98.38
name: MILITARY
- type: eval_NON_PAPER
value: 95.95
name: NON_PAPER
- type: eval_OPINION
value: 96.24
name: OPINION
- type: eval_POLITICS
value: 97.73
name: POLITICS
- type: eval_REFUGEE
value: 99.49
name: REFUGEE
- type: eval_REGIONAL
value: 95.2
name: REGIONAL
- type: eval_RELIGION
value: 99.22
name: RELIGION
- type: eval_SCIENCE
value: 98.37
name: SCIENCE
- type: eval_SOCIAL_MEDIA
value: 99.1
name: SOCIAL_MEDIA
- type: eval_SOCIETY
value: 94.39
name: SOCIETY
- type: eval_SPORTS
value: 99.39
name: SPORTS
- type: eval_TECH
value: 99.23
name: TECH
- type: eval_TOURISM
value: 99.0
name: TOURISM
- type: eval_TRANSPORT
value: 98.79
name: TRANSPORT
- type: eval_TRAVEL
value: 98.32
name: TRAVEL
- type: eval_WEATHER
value: 99.5
name: WEATHER
widget:
- text: "Παρ’ ολίγον «θερμό» επεισόδιο τουρκικού πολεμικού πλοίου με ελληνικό ωκεανογραφικό στην περιοχή μεταξύ Ρόδου και Καστελόριζου, στο διάστημα 20-23 Σεπτεμβρίου, αποκάλυψε το ΟΡΕΝ. Σύμφωνα με πληροφορίες που μετέδωσε το κεντρικό δελτίο ειδήσεων, όταν το ελληνικό ερευνητικό « ΑΙΓΑΙΟ » που ανήκει στο Ελληνικό Κέντρο Θαλασσίων Ερευνών βγήκε έξω από τα 6 ν.μ, σε διεθνή ύδατα, το προσέγγισε τουρκικό πολεμικό πλοίο, ο κυβερνήτης του οποίου ζήτησε δύο φορές μέσω ασυρμάτου να ενημερωθεί για τα στοιχεία του πλοίου, αλλά και για την αποστολή του. Ο πλοίαρχος του ελληνικού ερευνητικού δεν απάντησε και τελικά το τουρκικό πολεμικό απομακρύνθηκε."
example_title: Topic AFFAIRS
- text: "Η κυβερνητική ανικανότητα οδηγεί την χώρα στο χάος. Η κυβερνηση Μητσοτακη αδυνατεί να διαχειριστεί την πανδημία. Δεν μπορει ούτε να πείσει τον κόσμο να εμβολιαστεί, που ήταν το πιο απλο πράγμα. Σημερα λοιπόν φτάσαμε στο σημείο να μιλάμε για επαναφορά της χρήσης μάσκας σε εξωτερικούς χώρους ακόμη και όπου δεν υπάρχει συγχρωτισμός. Στις συζητήσεις των ειδικών θα βρεθεί επίσης το ενδεχόμενο για τοπικά lockdown σε περιοχές με βαρύ ιικό φορτίο για να μην ξεφύγει η κατάσταση, ενώ θα χρειάζεται κάποιος για τις μετακινήσεις του είτε πιστοποιητικό εμβολιασμού ή νόσησης και οι ανεμβολίαστοι rapid ή μοριακό τεστ."
example_title: Topic COVID
- text: "Η «ωραία Ελένη» επέστρεψε στην τηλεόραση, μέσα από τη συχνότητα του MEGA και άφησε τις καλύτερες εντυπώσεις. Το πλατό από το οποίο εμφανίζεται η Ελένη Μενεγάκη έχει φτιαχτεί από την αρχή για την εκπομπή της. Σήμερα, στο κλείσιμο της εκπομπής η Ελένη πέρασε ανάμεσα από τις κάμερες για να μπει στο καμαρίνι της «Μην τρομοκρατείστε, είμαι η Ελένη Μενεγάκη, τα κάνω αυτά. Με συγχωρείται, έχω ψυχολογικά αν δεν είμαι ελεύθερη» είπε αρχικά η παρουσιάστρια στους συνεργάτες της και πρόσθεσε στη συνέχεια: «Η Ελένη ολοκλήρωσε. Μπορείτε να συνεχίσετε με το υπόλοιπο πρόγραμμα του Mega. Εγώ ανοίγω το καμαρίνι, αν με αφήσουν. Μπαίνω καμαρίνι». Δείτε το απόσπασμα!"
example_title: Topic ENTERTAINMENT
- text: "Ένα εξαιρετικά ενδιαφέρον «κουτσομπολιό» εντόπισαν οι κεραίες της στήλης πέριξ του Μεγάρου Μαξίμου : το κατά πόσον, δηλαδή, ο «εξ απορρήτων» του Κυριάκου Μητσοτάκη , Γιώργος Γεραπετρίτης μετέχει στη διαχείριση της πανδημίας και στην διαδικασία λήψης αποφάσεων. Το εν λόγω «κουτσομπολιό» πυροδότησε το γεγονός ότι σε σαββατιάτικη εφημερίδα δημοσιεύθηκαν προχθές δηλώσεις του υπουργού Επικρατείας με τις οποίες απέκλειε κάθε σενάριο νέων οριζόντιων μέτρων και την ίδια ώρα, το Μαξίμου ανήγγελλε… καραντίνα στη Μύκονο. «Είναι αυτονόητο ότι η κοινωνία και η οικονομία δεν αντέχουν οριζόντιους περιορισμούς», έλεγε χαρακτηριστικά ο Γεραπετρίτης, την ώρα που η κυβέρνηση ανακοίνωνε… αυτούς τους οριζόντιους περιορισμούς. Ως εκ τούτων, δύο τινά μπορεί να συμβαίνουν: είτε ο υπουργός Επικρατείας δεν μετέχει πλέον στη λήψη των αποφάσεων, είτε η απόφαση για οριζόντια μέτρα ελήφθη υπό το κράτος πανικού το πρωί του Σαββάτου, όταν έφτασε στο Μαξίμου η τελευταία «φουρνιά» των επιδημιολογικών δεδομένων για το νησί των ανέμων…"
example_title: Topic NON_PAPER
- text: "Είναι ξεκάθαρο ότι μετά το πλήγμα που δέχθηκε η κυβέρνησή του από τις αδυναμίες στην αντιμετώπιση των καταστροφικών πυρκαγιών το μεγάλο στοίχημα για τον Κυριάκο Μητσοτάκη είναι να προχωρήσει συντεταγμένα και χωρίς παρατράγουδα ο σχεδιασμός για την αποκατάσταση των ζημιών. Ο Πρωθυπουργός έχει ήδη φτιάξει μια ομάδα κρούσης την οποία αποτελούν 9 υπουργοί. Τα μέλη που απαρτίζουν την ομάδα κρούσης και τα οποία βρίσκονται σε συνεχή, καθημερινή επαφή με τον Κυριάκο Μητσοτάκη είναι, όπως μας πληροφορεί η στήλη «Θεωρείο» της «Καθημερινής» είναι οι: Γ. Γεραπετρίτης, Α. Σκέρτσος, Χρ. Τριαντόπουλος, Κ. Καραμανλής, Κ. Σκρέκας, Στ. Πέτσας, Σπ. Λιβανός και φυσικά οι Χρ. Σταικούρας και Θ. Σκυλακάκης."
example_title: Topic OPINION
---
**Disclaimer**: *This model is still under testing and may change in the future; we will try to maintain backwards compatibility. For any questions, reach us at info@cvcio.org*
# MediaWatch News Topics (Greek)
Fine-tuned model for multi-label text classification (SequenceClassification), based on [roberta-el-news](https://huggingface.co/cvcio/roberta-el-news), using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. The model classifies news in real time into up to 33 topics, including: *AFFAIRS*, *AGRICULTURE*, *ARTS_AND_CULTURE*, *BREAKING_NEWS*, *BUSINESS*, *COVID*, *ECONOMY*, *EDUCATION*, *ELECTIONS*, *ENTERTAINMENT*, *ENVIRONMENT*, *FOOD*, *HEALTH*, *INTERNATIONAL*, *LAW_AND_ORDER*, *MILITARY*, *NON_PAPER*, *OPINION*, *POLITICS*, *REFUGEE*, *REGIONAL*, *RELIGION*, *SCIENCE*, *SOCIAL_MEDIA*, *SOCIETY*, *SPORTS*, *TECH*, *TOURISM*, *TRANSPORT*, *TRAVEL*, *WEATHER*, *CRIME*, *JUSTICE*.
## How to use
You can use this model directly with a pipeline for text-classification:
```python
from transformers import pipeline
pipe = pipeline(
task="text-classification",
model="cvcio/mediawatch-el-topics",
tokenizer="cvcio/roberta-el-news" # or cvcio/mediawatch-el-topics
)
topics = pipe(
"Η βιασύνη αρκετών χωρών να άρουν τους περιορισμούς κατά του κορονοϊού, "+
"αν όχι να κηρύξουν το τέλος της πανδημίας, με το σκεπτικό ότι έφτασε "+
"πλέον η ώρα να συμβιώσουμε με την Covid-19, έχει κάνει μερικούς πιο "+
"επιφυλακτικούς επιστήμονες να προειδοποιούν ότι πρόκειται μάλλον "+
"για «ενδημική αυταπάτη» και ότι είναι πρόωρη τέτοια υπερβολική "+
"χαλάρωση. Καθώς τα κρούσματα της Covid-19, μετά το αιφνιδιαστικό "+
"μαζικό κύμα της παραλλαγής Όμικρον, εμφανίζουν τάση υποχώρησης σε "+
"Ευρώπη και Βόρεια Αμερική, όπου περισσεύει η κόπωση μεταξύ των "+
"πολιτών μετά από δύο χρόνια πανδημίας, ειδικοί και μη αδημονούν να "+
"«ξεμπερδέψουν» με τον κορονοϊό.",
padding=True,
truncation=True,
max_length=512,
return_all_scores=True
)
print(topics)
# outputs
[
[
{'label': 'AFFAIRS', 'score': 0.0018806682201102376},
{'label': 'AGRICULTURE', 'score': 0.00014653144171461463},
{'label': 'ARTS_AND_CULTURE', 'score': 0.0012948638759553432},
{'label': 'BREAKING_NEWS', 'score': 0.0001729220530251041},
{'label': 'BUSINESS', 'score': 0.0028276608791202307},
{'label': 'COVID', 'score': 0.4407998025417328},
{'label': 'ECONOMY', 'score': 0.039826102554798126},
{'label': 'EDUCATION', 'score': 0.0019098613411188126},
{'label': 'ELECTIONS', 'score': 0.0003333651984576136},
{'label': 'ENTERTAINMENT', 'score': 0.004249618388712406},
{'label': 'ENVIRONMENT', 'score': 0.0015828514005988836},
{'label': 'FOOD', 'score': 0.0018390495097264647},
{'label': 'HEALTH', 'score': 0.1204477995634079},
{'label': 'INTERNATIONAL', 'score': 0.25892165303230286},
{'label': 'LAW_AND_ORDER', 'score': 0.07646272331476212},
{'label': 'MILITARY', 'score': 0.00033025629818439484},
{'label': 'NON_PAPER', 'score': 0.011991199105978012},
{'label': 'OPINION', 'score': 0.16166265308856964},
{'label': 'POLITICS', 'score': 0.0008890336030162871},
{'label': 'REFUGEE', 'score': 0.0011504743015393615},
{'label': 'REGIONAL', 'score': 0.0008734092116355896},
{'label': 'RELIGION', 'score': 0.0009001944563351572},
{'label': 'SCIENCE', 'score': 0.05075162276625633},
{'label': 'SOCIAL_MEDIA', 'score': 0.00039615994319319725},
{'label': 'SOCIETY', 'score': 0.0043518817983567715},
{'label': 'SPORTS', 'score': 0.002416545059531927},
{'label': 'TECH', 'score': 0.0007818648009561002},
{'label': 'TOURISM', 'score': 0.011870541609823704},
{'label': 'TRANSPORT', 'score': 0.0009422845905646682},
{'label': 'TRAVEL', 'score': 0.03004464879631996},
{'label': 'WEATHER', 'score': 0.00040286066359840333},
{'label': 'CRIME', 'score': 0.0005416403291746974},
{'label': 'JUSTICE', 'score': 0.000990519649349153}
]
]
```
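With `return_all_scores=True` the pipeline returns a score for every label, so downstream code typically keeps only the topics above a cutoff. A minimal sketch using the scores shown above (the 0.1 threshold is an arbitrary illustration, not part of the model card):

```python
# Toy post-processing of the pipeline output shown above.
# The threshold value is an assumption chosen for illustration.
scores = [
    {"label": "COVID", "score": 0.4407998025417328},
    {"label": "INTERNATIONAL", "score": 0.25892165303230286},
    {"label": "OPINION", "score": 0.16166265308856964},
    {"label": "HEALTH", "score": 0.1204477995634079},
    {"label": "SPORTS", "score": 0.002416545059531927},
]

def top_topics(label_scores, threshold=0.1):
    """Return labels above the threshold, sorted by descending score."""
    kept = [d for d in label_scores if d["score"] > threshold]
    return [d["label"] for d in sorted(kept, key=lambda d: -d["score"])]

print(top_topics(scores))  # ['COVID', 'INTERNATIONAL', 'OPINION', 'HEALTH']
```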
## Labels
All labels, except *NON_PAPER*, were retrieved from the source articles during the data collection step, without any preprocessing, on the assumption that journalists and newsrooms assign correct tags to the articles. We disregarded all articles with more than 6 tags to reduce bias and tag manipulation.
| label | roc_auc | samples |
|-------:|--------:|--------:|
| AFFAIRS | 0.9872 | 6,314 |
| AGRICULTURE | 0.9799 | 1,254 |
| ARTS_AND_CULTURE | 0.9838 | 15,968 |
| BREAKING_NEWS | 0.9675 | 827 |
| BUSINESS | 0.9811 | 6,507 |
| COVID | 0.9620 | 50,000 |
| CRIME | 0.9885 | 34,421 |
| ECONOMY | 0.9765 | 45,474 |
| EDUCATION | 0.9865 | 10,111 |
| ELECTIONS | 0.9940 | 7,571 |
| ENTERTAINMENT | 0.9925 | 23,323 |
| ENVIRONMENT | 0.9847 | 23,060 |
| FOOD | 0.9934 | 3,712 |
| HEALTH | 0.9723 | 16,852 |
| INTERNATIONAL | 0.9624 | 50,000 |
| JUSTICE | 0.9862 | 4,860 |
| LAW_AND_ORDER | 0.9177 | 50,000 |
| MILITARY | 0.9838 | 6,536 |
| NON_PAPER | 0.9595 | 4,589 |
| OPINION | 0.9624 | 6,296 |
| POLITICS | 0.9773 | 50,000 |
| REFUGEE | 0.9949 | 4,536 |
| REGIONAL | 0.9520 | 50,000 |
| RELIGION | 0.9922 | 11,533 |
| SCIENCE | 0.9837 | 1,998 |
| SOCIAL_MEDIA | 0.9910 | 6,212 |
| SOCIETY | 0.9439 | 50,000 |
| SPORTS | 0.9939 | 31,396 |
| TECH | 0.9923 | 8,225 |
| TOURISM | 0.9900 | 8,081 |
| TRANSPORT | 0.9879 | 3,211 |
| TRAVEL | 0.9832 | 4,638 |
| WEATHER | 0.9950 | 19,931 |
| loss | 0.0533 | - |
| roc_auc | 0.9855 | - |
## Pretraining
The model was pretrained on an NVIDIA A10 GPU for 15 epochs (approximately 59K steps, about 8 hours of training) with a batch size of 128. The optimizer used was Adam with a learning rate of 1e-5 and a weight decay of 0.01. We used micro-averaged ROC AUC (`roc_auc_micro`) to evaluate the results.
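Micro-averaged ROC AUC flattens all label columns into a single binary ranking problem and computes one AUC over the flattened vectors. A pure-Python sketch of the idea on toy data (the actual evaluation presumably relied on a standard implementation such as scikit-learn's `roc_auc_score(..., average="micro")`):

```python
def roc_auc(y_true, y_score):
    """AUC via the Mann-Whitney U statistic, with average ranks for tied scores."""
    pairs = sorted(zip(y_score, y_true))
    n = len(pairs)
    rank_sum_pos = 0.0
    idx = 0
    while idx < n:
        j = idx
        while j < n and pairs[j][0] == pairs[idx][0]:
            j += 1  # group tied scores together
        avg_rank = (idx + 1 + j) / 2.0  # average of the 1-based ranks idx+1 .. j
        for k in range(idx, j):
            if pairs[k][1] == 1:
                rank_sum_pos += avg_rank
        idx = j
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

def roc_auc_micro(y_true_rows, y_score_rows):
    """Flatten per-label columns into one vector, then compute a single AUC."""
    flat_true = [v for row in y_true_rows for v in row]
    flat_score = [v for row in y_score_rows for v in row]
    return roc_auc(flat_true, flat_score)

# toy multi-label example: 2 samples x 3 labels, perfectly ranked
y_true = [[1, 0, 0], [0, 1, 1]]
y_score = [[0.9, 0.2, 0.1], [0.3, 0.8, 0.7]]
print(roc_auc_micro(y_true, y_score))  # 1.0
```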
### Framework versions
- Transformers 4.13.0
- Pytorch 1.9.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
## Authors
Dimitris Papaevagelou - [@andefined](https://github.com/andefined)
## About Us
[Civic Information Office](https://cvcio.org/) is a non-profit organization based in Athens, Greece, focusing on creating technology and research products for the public interest. | 12,750 |
emrecan/bert-base-multilingual-cased-snli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
| 332 |
jinmang2/textcnn-ko-dialect-classifier | [
"dialect",
"standard"
] | Entry not found | 15 |
nepp1d0/ChemBERTa_drug_state_classification | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LA... | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ChemBERTa_drug_state_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ChemBERTa_drug_state_classification
This model is a fine-tuned version of [nepp1d0/ChemBERTa_drug_state_classification](https://huggingface.co/nepp1d0/ChemBERTa_drug_state_classification) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0463
- Accuracy: 0.9870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5063 | 1.0 | 240 | 0.3069 | 0.9160 |
| 0.3683 | 2.0 | 480 | 0.2135 | 0.9431 |
| 0.2633 | 3.0 | 720 | 0.1324 | 0.9577 |
| 0.1692 | 4.0 | 960 | 0.0647 | 0.9802 |
| 0.1109 | 5.0 | 1200 | 0.0463 | 0.9870 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
| 1,682 |
sismetanin/xlm_roberta_large-ru-sentiment-rureviews | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XLM-RoBERTa-Large-ru-sentiment-RuReviews
XLM-RoBERTa-Large-ru-sentiment-RuReviews is a [XLM-RoBERTa-Large](https://huggingface.co/xlm-roberta-large) model fine-tuned on the [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the “Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia.
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
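The scoring scheme described above can be sketched in a few lines; the grouping of metrics into tasks and the toy numbers below are illustrative assumptions, not the exact leaderboard computation:

```python
def task_score(metric_values):
    """Unweighted average of a task's metrics (e.g. weighted F1 and F1)."""
    return sum(metric_values) / len(metric_values)

def leaderboard_score(tasks):
    """Macro-average of per-task scores, as in the GLUE benchmark."""
    per_task = [task_score(values) for values in tasks.values()]
    return sum(per_task) / len(per_task)

# toy example: one two-metric task and two single-metric tasks
tasks = {
    "RuSentiment": [78.0, 76.0],  # averaged first, then macro-averaged
    "KRND": [75.0],
    "RuReviews": [79.0],
}
print(leaderboard_score(tasks))  # 77.0
```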
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` | 6,362 |
textattack/distilbert-base-uncased-WNLI | null | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 128, a learning
rate of 2e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5633802816901409, as measured by the
eval set accuracy, found after 0 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
| 629 |
danhsf/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.926557813198531
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9265
- F1: 0.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8631 | 1.0 | 250 | 0.3221 | 0.904 | 0.9011 |
| 0.254 | 2.0 | 500 | 0.2201 | 0.9265 | 0.9266 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,806 |
hackathon-pln-es/Detect-Acoso-Twitter-Es | [
"No acoso",
"acoso"
] | ---
license: apache-2.0
language: "es"
tags:
- generated_from_trainer
- es
- text-classification
- acoso
- twitter
- cyberbullying
datasets:
- hackathon-pln-es/Dataset-Acoso-Twitter-Es
widget:
- text: "Que horrible como la farándula chilena siempre se encargaba de dejar mal a las mujeres. Un asco"
- text: "Hay que ser bien menestra para amenazar a una mujer con una llave de ruedas. Viendo como se viste no me queda ninguna duda"
- text: "más centrados en tener una sociedad reprimida y sumisa que en estudiar y elaborar políticas de protección hacia las personas de mayor riesgo ante el virus."
metrics:
- accuracy
model-index:
- name: Detección de acoso en Twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Harassment Detection on Twitter (Spanish)
This model is a fine-tuned version of [mrm8488/distilroberta-finetuned-tweets-hate-speech](https://huggingface.co/mrm8488/distilroberta-finetuned-tweets-hate-speech) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1628
- Accuracy: 0.9167
# UNL: Universidad Nacional de Loja
## Team members:
- Anderson Quizhpe <br>
- Luis Negrón <br>
- David Pacheco <br>
- Bryan Requenes <br>
- Paul Pasaca
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6732 | 1.0 | 27 | 0.3797 | 0.875 |
| 0.5537 | 2.0 | 54 | 0.3242 | 0.9167 |
| 0.5218 | 3.0 | 81 | 0.2879 | 0.9167 |
| 0.509 | 4.0 | 108 | 0.2606 | 0.9167 |
| 0.4196 | 5.0 | 135 | 0.1628 | 0.9167 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 2,414 |
anwarvic/distilbert-base-uncased-for-fakenews | [
"LABEL_0"
] | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT (uncased) for Fake News Classification
This model is a classification model built by fine-tuning
[DistilBERT base model](https://huggingface.co/distilbert-base-uncased).
This model was trained using
[fake-and-real-news-dataset](https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset)
for five epochs.
> **NOTE:**
This model is just a POC (proof-of-concept) for a fellowship I was applying for.
## Intended uses & limitations
Note that this model is primarily aimed at classifying an article as either
"Fake" or "Real".
### How to use
Check this [notebook](https://www.kaggle.com/code/mohamedanwarvic/fakenewsclassifier-fatima-fellowship) on Kaggle. | 770 |
JNK789/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9305
- name: F1
type: f1
value: 0.9307950942842982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1712
- Accuracy: 0.9305
- F1: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7721 | 1.0 | 250 | 0.2778 | 0.9145 | 0.9131 |
| 0.2103 | 2.0 | 500 | 0.1818 | 0.925 | 0.9249 |
| 0.1446 | 3.0 | 750 | 0.1712 | 0.9305 | 0.9308 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,877 |
hackathon-pln-es/bertin-roberta-base-zeroshot-esnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
pipeline_tag: zero-shot-classification
tags:
- zero-shot-classification
- nli
language:
- es
datasets:
- hackathon-pln-es/nli-es
widget:
- text: "Para detener la pandemia, es importante que todos se presenten a vacunarse."
candidate_labels: "salud, deporte, entretenimiento"
---
# A zero-shot classifier based on bertin-roberta-base-spanish
This model was trained on the basis of the model `bertin-roberta-base-spanish` using a **Cross-Encoder** for the NLI task. A CrossEncoder takes a sentence pair as input and outputs a label, so the model learns to predict one of three labels: "contradiction": 0, "entailment": 1, "neutral": 2.
You can use it with Hugging Face's Zero-shot pipeline to make **zero-shot classifications**. Given a sentence and an arbitrary set of labels/topics, it will output the likelihood of the sentence belonging to each of the topics.
## Usage (HuggingFace Transformers)
The simplest way to use the model is the huggingface transformers pipeline tool. Just initialize the pipeline specifying the task as "zero-shot-classification" and select "hackathon-pln-es/bertin-roberta-base-zeroshot-esnli" as model.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="hackathon-pln-es/bertin-roberta-base-zeroshot-esnli")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Esta oración es sobre {}."
)
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
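Under the hood, the zero-shot pipeline runs the NLI model once per (sentence, hypothesis) pair and then softmax-normalizes the entailment logits across the candidate labels. A toy sketch of that final normalization step, with made-up logits rather than real model outputs:

```python
import math

def zero_shot_scores(entailment_logits):
    """Softmax over per-label entailment logits (single-label zero-shot mode)."""
    m = max(entailment_logits.values())  # subtract max for numerical stability
    exps = {label: math.exp(v - m) for label, v in entailment_logits.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# hypothetical entailment logits for the template 'Esta oración es sobre {}.'
scores = zero_shot_scores({"cultura": 2.1, "sociedad": 0.3, "economia": -1.0})
print(max(scores, key=scores.get))  # cultura
```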
## Training
We used [sentence-transformers](https://www.SBERT.net) to train the model.
**Dataset**
We used a collection of datasets of Natural Language Inference as training data:
- [ESXNLI](https://raw.githubusercontent.com/artetxem/esxnli/master/esxnli.tsv), only the part in spanish
- [SNLI](https://nlp.stanford.edu/projects/snli/), automatically translated
- [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/), automatically translated
The whole dataset used is available [here](https://huggingface.co/datasets/hackathon-pln-es/nli-es).
## Authors
- [Anibal Pérez](https://huggingface.co/Anarpego)
- [Emilio Tomás Ariza](https://huggingface.co/medardodt)
- [Lautaro Gesuelli Pinto](https://huggingface.co/Lautaro)
- [Mauricio Mazuecos](https://huggingface.co/mmazuecos)
| 2,537 |
Graphcore/hubert-base-superb-ks | [
"_silence_",
"_unknown_",
"down",
"go",
"left",
"no",
"off",
"on",
"right",
"stop",
"up",
"yes"
] | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: hubert-base-superb-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-superb-ks
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0848
- Accuracy: 0.9822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- training precision: Mixed Precision
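The totals above imply a data-parallel replication factor that the card does not state explicitly; it can be inferred from the batch-size arithmetic (an inference, not a documented setting):

```python
per_device_batch = 2   # train_batch_size
grad_accum_steps = 16  # gradient_accumulation_steps
total_train_batch = 128

# total = per-device batch x gradient accumulation x number of replicas
implied_replicas = total_train_batch // (per_device_batch * grad_accum_steps)
print(implied_replicas)  # 4
```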
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,395 |
daveni/aesthetic_attribute_classifier | [
"color_lighting",
"composition",
"depth_of_field",
"focus",
"general_impression",
"subject_of_photo",
"use_of_camera"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: aesthetic_attribute_classifier
results: []
widget:
- text: Check your vertical on the main support; it looks a little off. I'd also like to see how it looks with a bit of the sky cropped from the photo
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aesthetic_attribute_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [PCCD dataset](https://github.com/ivclab/DeepPhotoCritic-ICCV17).
It achieves the following results on the evaluation set:
- Loss: 0.3976
- Precision: {'precision': 0.877129341279301}
- Recall: {'recall': 0.8751381215469614}
- F1: {'f1': 0.875529982855803}
- Accuracy: {'accuracy': 0.8751381215469614}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:--------------------------------:|
| 0.452 | 1.0 | 1528 | 0.4109 | {'precision': 0.8632779077963935} | {'recall': 0.8615101289134438} | {'f1': 0.8618616182904953} | {'accuracy': 0.8615101289134438} |
| 0.3099 | 2.0 | 3056 | 0.3976 | {'precision': 0.877129341279301} | {'recall': 0.8751381215469614} | {'f1': 0.875529982855803} | {'accuracy': 0.8751381215469614} |
| 0.227 | 3.0 | 4584 | 0.4320 | {'precision': 0.876211408446225} | {'recall': 0.874401473296501} | {'f1': 0.8747427955387239} | {'accuracy': 0.874401473296501} |
| 0.1645 | 4.0 | 6112 | 0.4840 | {'precision': 0.8724641667216837} | {'recall': 0.8714548802946593} | {'f1': 0.8714577820909117} | {'accuracy': 0.8714548802946593} |
| 0.1141 | 5.0 | 7640 | 0.5083 | {'precision': 0.8755445355051571} | {'recall': 0.8747697974217311} | {'f1': 0.8748766125899489} | {'accuracy': 0.8747697974217311} |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
| 2,850 |
Toshifumi/distilbert-base-multilingual-cased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8885
- name: F1
type: f1
value: 0.8888307905223247
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3702
- Accuracy: 0.8885
- F1: 0.8888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1646 | 1.0 | 250 | 0.6190 | 0.8085 | 0.7992 |
| 0.4536 | 2.0 | 500 | 0.3702 | 0.8885 | 0.8888 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,851 |
SeNSiTivE/Learning-sentiment-analysis-through-imdb-ds | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: Learning-sentiment-analysis-through-imdb-ds
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8817891373801918
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Learning-sentiment-analysis-through-imdb-ds
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3419
- Accuracy: 0.8767
- F1: 0.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,529 |
Intel/bert-base-uncased-mrpc-int8-static | [
"0",
"1"
] | ---
language: en
license: apache-2.0
tags:
- text-classification
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- mrpc
metrics:
- f1
---
# INT8 BERT base uncased finetuned MRPC
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size of 300 isn't exactly divisible by the batch size of 8, so the real sampling size is 304.
The linear modules **bert.encoder.layer.9.output.dense** and **bert.encoder.layer.10.output.dense** fall back to fp32 to meet the 1% relative accuracy loss requirement.
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8997|0.9042|
| **Model size (MB)** |120|418|
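From the table above, the relative F1 drop and the size reduction can be checked directly:

```python
fp32_f1, int8_f1 = 0.9042, 0.8997
fp32_mb, int8_mb = 418, 120

relative_drop = (fp32_f1 - int8_f1) / fp32_f1  # about 0.50%, within the 1% budget
compression = fp32_mb / int8_mb                # about 3.48x smaller on disk

print(f"relative F1 drop: {relative_drop:.2%}")
print(f"size reduction: {compression:.2f}x")
```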
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/bert-base-uncased-mrpc-int8-static',
)
```
| 1,163 |
tristantristantristan/rumor | [
"0",
"1"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- tristantristantristan/autotrain-data-rumour_detection
co2_eq_emissions: 0.056186258092819436
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 813825547
- CO2 Emissions (in grams): 0.056186258092819436
## Validation Metrics
- Loss: 0.15057753026485443
- Accuracy: 0.9738805970149254
- Precision: 0.9469026548672567
- Recall: 0.9304347826086956
- AUC: 0.9891149437157905
- F1: 0.9385964912280702
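As a quick consistency check, the reported F1 is the harmonic mean of the reported precision and recall:

```python
precision = 0.9469026548672567
recall = 0.9304347826086956

f1 = 2 * precision * recall / (precision + recall)
print(f1)  # approximately 0.9385964912280702, matching the reported F1
```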
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/tristantristantristan/autotrain-rumour_detection-813825547
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("tristantristantristan/autotrain-rumour_detection-813825547", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("tristantristantristan/autotrain-rumour_detection-813825547", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,262 |
KoenBronstring/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8758169934640523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3149
- Accuracy: 0.8733
- F1: 0.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,519 |
binay1999/bert-for-text-classification | null | Entry not found | 15 |
moma1820/cluster4 | null | Entry not found | 15 |
Nithiwat/soda-berta | null | Entry not found | 15 |
classla/bcms-bertic-parlasent-bcs-ter | [
"Negative",
"Neutral",
"Positive"
] | ---
language: "hr"
tags:
- text-classification
- sentiment-analysis
widget:
- text: "Poštovani potpredsjedničke Vlade i ministre hrvatskih branitelja, mislite li da ste zapravo iznevjerili svoje suborce s kojima ste 555 dana prosvjedovali u šatoru protiv tadašnjih dužnosnika jer ste zapravo donijeli zakon koji je neprovediv, a birali ste si suradnike koji nemaju etički integritet."
---
# bcms-bertic-parlasent-bcs-ter
Ternary text classification model based on [`classla/bcms-bertic`](https://huggingface.co/classla/bcms-bertic) and fine-tuned on the BCS Political Sentiment dataset (sentence-level data).
This classifier classifies text into only three categories: Negative, Neutral, and Positive. For the binary classifier (Negative, Other) check [this model](https://huggingface.co/classla/bcms-bertic-parlasent-bcs-bi).
For details on the dataset and the finetuning procedure, please see [this paper](https://arxiv.org/abs/2206.00929).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. Beforehand a brief sweep for the optimal number of epochs was performed and the presumed best value was 9. Other arguments were kept default.
```python
model_args = {
"num_train_epochs": 9
}
```
## Performance
The same pipeline was run with two other transformer models and `fasttext` for comparison. Macro F1 scores were recorded for each of the 6 fine-tuning sessions and analyzed afterwards.
| model | average macro F1 |
|---------------------------------|--------------------|
| bcms-bertic-parlasent-bcs-ter | 0.7941 ± 0.0101 ** |
| EMBEDDIA/crosloengual-bert | 0.7709 ± 0.0113 |
| xlm-roberta-base | 0.7184 ± 0.0139 |
| fasttext + CLARIN.si embeddings | 0.6312 ± 0.0043 |
Two best performing models have been compared with the Mann-Whitney U test to calculate p-values (** denotes p<0.01).
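The Mann-Whitney U comparison above can be sketched in pure Python. The twelve per-run scores below are hypothetical (the card only reports means and standard deviations); the point is the rank-based U statistic itself:

```python
from itertools import chain

def mann_whitney_u(xs, ys):
    """U statistics for two independent samples, using midranks for ties."""
    combined = sorted(chain(xs, ys))
    def midrank(v):
        first = combined.index(v) + 1                   # 1-based rank of first occurrence
        last = len(combined) - combined[::-1].index(v)  # 1-based rank of last occurrence
        return (first + last) / 2
    n1, n2 = len(xs), len(ys)
    r1 = sum(midrank(v) for v in xs)                    # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    return u1, n1 * n2 - u1

# Hypothetical macro-F1 scores for 6 runs of each of the two best models:
bertic = [0.790, 0.801, 0.785, 0.799, 0.794, 0.796]
cse    = [0.768, 0.772, 0.775, 0.760, 0.779, 0.771]
u1, u2 = mann_whitney_u(bertic, cse)
print(u1, u2)  # 36.0 0.0 -> complete separation of the two samples
```

With complete separation and n1 = n2 = 6, the exact two-sided p-value is 2/C(12,6) ≈ 0.0022, i.e. below the 0.01 threshold marked with **. In practice one would normally call `scipy.stats.mannwhitneyu` rather than hand-roll the statistic.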
## Use example with `simpletransformers==0.63.7`
```python
from simpletransformers.classification import ClassificationModel
model = ClassificationModel("electra", "classla/bcms-bertic-parlasent-bcs-ter")
predictions, logits = model.predict([
"Vi niste normalni",
"Đački autobusi moraju da voze svaki dan",
"Ovo je najbolji zakon na svetu",
]
)
predictions
# Output: array([0, 1, 2])
[model.config.id2label[i] for i in predictions]
# Output: ['Negative', 'Neutral', 'Positive']
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
and the paper describing the dataset and methods for the current finetuning:
```
@misc{https://doi.org/10.48550/arxiv.2206.00929,
doi = {10.48550/ARXIV.2206.00929},
url = {https://arxiv.org/abs/2206.00929},
author = {Mochtak, Michal and Rupnik, Peter and Ljubešič, Nikola},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {The ParlaSent-BCS dataset of sentiment-annotated parliamentary debates from Bosnia-Herzegovina, Croatia, and Serbia},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
``` | 3,745 |
Jerimee/autotrain-dontknowwhatImdoing-980432459 | [
"Goblin",
"Mundane"
] | ---
tags: autotrain
language: en
widget:
- text: "Jerimee"
example_title: "a weird human name"
- text: "Curtastica"
example_title: "a goblin name"
- text: "Fatima"
example_title: "a common human name"
datasets:
- Jerimee/autotrain-data-dontknowwhatImdoing
co2_eq_emissions: 0.012147398577917884
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 980432459
- CO2 Emissions (in grams): 0.012147398577917884
## Validation Metrics
- Loss: 0.0469294898211956
- Accuracy: 0.9917355371900827
- Precision: 0.9936708860759493
- Recall: 0.9936708860759493
- AUC: 0.9990958408679927
- F1: 0.9936708860759493
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Jerimee/autotrain-dontknowwhatImdoing-980432459
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Jerimee/autotrain-dontknowwhatImdoing-980432459", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Jerimee/autotrain-dontknowwhatImdoing-980432459", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,354 |
domenicrosati/deberta-v3-large-finetuned-DAGPap22 | null | ---
license: mit
tags:
- text-classification
- generated_from_trainer
model-index:
- name: deberta-v3-large-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-DAGPap22
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,140 |
ychenNLP/arabic-relation-extraction | [
"ART",
"GEN-AFF",
"O",
"ORG-AFF",
"PART-WHOLE",
"PER-SOC",
"PHYS"
] | ---
tags:
- BERT
- Text Classification
- relation
language:
- ar
- en
license: mit
datasets:
- ACE2005
---
# Arabic Relation Extraction Model
- [Github repo](https://github.com/edchengg/GigaBERT)
- Relation Extraction model based on [GigaBERTv4](https://huggingface.co/lanwuwei/GigaBERT-v4-Arabic-and-English).
- Model detail: mark two entities in the sentence with special markers (e.g., ```XXXX <PER> entity1 </PER> XXXXXXX <ORG> entity2 </ORG> XXXXX```). Then we use the BERT [CLS] representation to make a prediction.
- ACE2005 Training data: Arabic
- [Relation tags](https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/arabic-relations-guidelines-v6.5.pdf) including: Physical, Part-whole, Personal-Social, ORG-Affiliation, Agent-Artifact, Gen-Affiliation
## Hyperparameters
- learning_rate=2e-5
- num_train_epochs=10
- weight_decay=0.01
## How to use
Workflow of a relation extraction model:
1. Input --> NER model --> Entities
2. Input sentence + Entity 1 + Entity 2 --> Relation Classification Model --> Relation Type
```python
>>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer, AutoModelForSequenceClassification
>>> ner_model = AutoModelForTokenClassification.from_pretrained("ychenNLP/arabic-ner-ace")
>>> ner_tokenizer = AutoTokenizer.from_pretrained("ychenNLP/arabic-ner-ace")
>>> ner_pip = pipeline("ner", model=ner_model, tokenizer=ner_tokenizer, grouped_entities=True)
>>> re_model = AutoModelForSequenceClassification.from_pretrained("ychenNLP/arabic-relation-extraction")
>>> re_tokenizer = AutoTokenizer.from_pretrained("ychenNLP/arabic-relation-extraction")
>>> re_pip = pipeline("text-classification", model=re_model, tokenizer=re_tokenizer)
def process_ner_output(entity_mention, inputs):
re_input = []
for idx1 in range(len(entity_mention) - 1):
for idx2 in range(idx1 + 1, len(entity_mention)):
ent_1 = entity_mention[idx1]
ent_2 = entity_mention[idx2]
ent_1_type = ent_1['entity_group']
ent_2_type = ent_2['entity_group']
ent_1_s = ent_1['start']
ent_1_e = ent_1['end']
ent_2_s = ent_2['start']
ent_2_e = ent_2['end']
new_re_input = ""
for c_idx, c in enumerate(inputs):
if c_idx == ent_1_s:
new_re_input += "<{}>".format(ent_1_type)
elif c_idx == ent_1_e:
new_re_input += "</{}>".format(ent_1_type)
elif c_idx == ent_2_s:
new_re_input += "<{}>".format(ent_2_type)
elif c_idx == ent_2_e:
new_re_input += "</{}>".format(ent_2_type)
new_re_input += c
re_input.append({"re_input": new_re_input, "arg1": ent_1, "arg2": ent_2, "input": inputs})
return re_input
def post_process_re_output(re_output, text_input, ner_output):
    # note: relies on the module-level `re_input` list built below
    final_output = []
for idx, out in enumerate(re_output):
if out["label"] != 'O':
tmp = re_input[idx]
tmp['relation_type'] = out
tmp.pop('re_input', None)
final_output.append(tmp)
template = {"input": text_input,
"entity": ner_output,
"relation": final_output}
return template
text_input = """ويتزامن ذلك مع اجتماع بايدن مع قادة الدول الأعضاء في الناتو في قمة موسعة في العاصمة الإسبانية، مدريد."""
ner_output = ner_pip(text_input) # inference NER tags
re_input = process_ner_output(ner_output, text_input) # prepare a pair of entity and predict relation type
re_output = []
for idx in range(len(re_input)):
tmp_re_output = re_pip(re_input[idx]["re_input"]) # for each pair of entity, predict relation
re_output.append(tmp_re_output[0])
re_ner_output = post_process_re_output(re_output, text_input, ner_output) # post process NER and relation predictions
print("Sentence: ",re_ner_output["input"])
print('====Entity====')
for ent in re_ner_output["entity"]:
print('{}--{}'.format(ent["word"], ent["entity_group"]))
print('====Relation====')
for rel in re_ner_output["relation"]:
print('{}--{}:{}'.format(rel['arg1']['word'], rel['arg2']['word'], rel['relation_type']['label']))
# Expected output:
# Sentence: ويتزامن ذلك مع اجتماع بايدن مع قادة الدول الأعضاء في الناتو في قمة موسعة في العاصمة الإسبانية، مدريد.
# ====Entity====
# بايدن--PER
# قادة--PER
# الدول--GPE
# الناتو--ORG
# العاصمة--GPE
# الاسبانية--GPE
# مدريد--GPE
# ====Relation====
# قادة--الدول:ORG-AFF
# الدول--الناتو:ORG-AFF
# العاصمة--الاسبانية:PART-WHOLE
```
### BibTeX entry and citation info
```bibtex
@inproceedings{lan2020gigabert,
author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
title = {Giga{BERT}: Zero-shot Transfer Learning from {E}nglish to {A}rabic},
booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
year = {2020}
}
```
| 4,902 |
alanwang8/default-longformer-base-4096-finetuned-cola | null | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: longformer-base-4096-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-base-4096-finetuned-cola
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7005
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.7005 | 0.0 |
| 0.6995 | 2.0 | 536 | 0.6960 | -0.0043 |
| 0.6995 | 3.0 | 804 | 0.6976 | -0.0057 |
| 0.6962 | 4.0 | 1072 | 0.6983 | -0.0123 |
| 0.6962 | 5.0 | 1340 | 0.6977 | -0.0529 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,965 |
zluvolyote/DEREXP | [
"LABEL_0"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DEREXP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DEREXP
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1590
- Mse: 3.1590
- Mae: 1.3397
- R2: 0.4465
- Accuracy: 0.2528
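The Mse, Mae, and R2 values reported above can be computed with a small pure-Python helper (the target/prediction values below are hypothetical, for illustration only, not the model's actual outputs):

```python
def regression_metrics(y_true, y_pred):
    """Mean squared error, mean absolute error, and R^2."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot  # 1 minus residual/total sum of squares
    return mse, mae, r2

# Hypothetical targets and predictions:
mse, mae, r2 = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.5, 2.0, 2.5, 4.5])
print(mse, mae, r2)
```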
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:--------:|
| 14.7557 | 0.01 | 500 | 4.3307 | 4.3307 | 1.6240 | 0.2411 | 0.1976 |
| 4.5754 | 0.02 | 1000 | 4.1273 | 4.1273 | 1.5719 | 0.2768 | 0.2084 |
| 4.2925 | 0.02 | 1500 | 4.3074 | 4.3074 | 1.6155 | 0.2452 | 0.2012 |
| 3.9816 | 0.03 | 2000 | 3.7767 | 3.7767 | 1.5008 | 0.3382 | 0.2134 |
| 3.9171 | 0.04 | 2500 | 3.7033 | 3.7033 | 1.4732 | 0.3511 | 0.2304 |
| 3.946 | 0.05 | 3000 | 3.6217 | 3.6217 | 1.4552 | 0.3654 | 0.2352 |
| 4.1 | 0.06 | 3500 | 3.6101 | 3.6101 | 1.4612 | 0.3674 | 0.2216 |
| 3.8535 | 0.06 | 4000 | 3.6160 | 3.6160 | 1.4576 | 0.3664 | 0.2294 |
| 3.9037 | 0.07 | 4500 | 3.5864 | 3.5864 | 1.4476 | 0.3716 | 0.2374 |
| 3.9358 | 0.08 | 5000 | 3.5087 | 3.5087 | 1.4237 | 0.3852 | 0.2414 |
| 3.8062 | 0.09 | 5500 | 3.6085 | 3.6085 | 1.4595 | 0.3677 | 0.2256 |
| 3.8802 | 0.1 | 6000 | 3.6371 | 3.6371 | 1.4615 | 0.3627 | 0.223 |
| 3.7239 | 0.1 | 6500 | 3.5191 | 3.5191 | 1.4278 | 0.3834 | 0.2324 |
| 3.7618 | 0.11 | 7000 | 3.8408 | 3.8408 | 1.4973 | 0.3270 | 0.2316 |
| 3.7217 | 0.12 | 7500 | 3.8241 | 3.8241 | 1.5046 | 0.3299 | 0.2236 |
| 3.8204 | 0.13 | 8000 | 3.5290 | 3.5290 | 1.4256 | 0.3816 | 0.2388 |
| 3.7211 | 0.14 | 8500 | 3.6903 | 3.6903 | 1.4674 | 0.3534 | 0.227 |
| 3.7243 | 0.14 | 9000 | 3.4718 | 3.4718 | 1.4201 | 0.3917 | 0.231 |
| 3.7713 | 0.15 | 9500 | 3.8970 | 3.8970 | 1.5304 | 0.3171 | 0.2092 |
| 3.6289 | 0.16 | 10000 | 3.5273 | 3.5273 | 1.4255 | 0.3819 | 0.2388 |
| 3.7516 | 0.17 | 10500 | 3.9020 | 3.9020 | 1.5230 | 0.3163 | 0.2138 |
| 3.7491 | 0.18 | 11000 | 3.4809 | 3.4809 | 1.4209 | 0.3901 | 0.2378 |
| 3.7809 | 0.18 | 11500 | 3.8779 | 3.8779 | 1.5087 | 0.3205 | 0.229 |
| 3.7163 | 0.19 | 12000 | 3.5177 | 3.5177 | 1.4330 | 0.3836 | 0.2298 |
| 3.732 | 0.2 | 12500 | 3.9986 | 3.9986 | 1.5401 | 0.2993 | 0.218 |
| 3.7381 | 0.21 | 13000 | 3.4782 | 3.4782 | 1.4277 | 0.3905 | 0.2302 |
| 3.7652 | 0.22 | 13500 | 3.6239 | 3.6239 | 1.4587 | 0.3650 | 0.2244 |
| 3.6003 | 0.22 | 14000 | 3.4873 | 3.4873 | 1.4288 | 0.3889 | 0.2316 |
| 3.6865 | 0.23 | 14500 | 3.5895 | 3.5895 | 1.4511 | 0.3710 | 0.23 |
| 3.7398 | 0.24 | 15000 | 3.8835 | 3.8835 | 1.5183 | 0.3195 | 0.2172 |
| 3.5939 | 0.25 | 15500 | 3.6334 | 3.6334 | 1.4643 | 0.3633 | 0.2256 |
| 3.691 | 0.26 | 16000 | 3.4251 | 3.4251 | 1.3994 | 0.3998 | 0.2488 |
| 3.7279 | 0.26 | 16500 | 3.3956 | 3.3956 | 1.4034 | 0.4050 | 0.2336 |
| 3.797 | 0.27 | 17000 | 3.4029 | 3.4029 | 1.3968 | 0.4037 | 0.2486 |
| 3.684 | 0.28 | 17500 | 3.5831 | 3.5831 | 1.4451 | 0.3721 | 0.2304 |
| 3.5894 | 0.29 | 18000 | 3.6120 | 3.6120 | 1.4492 | 0.3671 | 0.2338 |
| 3.5938 | 0.3 | 18500 | 3.4975 | 3.4975 | 1.4240 | 0.3871 | 0.231 |
| 3.4948 | 0.3 | 19000 | 3.4791 | 3.4791 | 1.4167 | 0.3904 | 0.24 |
| 3.6527 | 0.31 | 19500 | 3.3409 | 3.3409 | 1.3817 | 0.4146 | 0.2474 |
| 3.5545 | 0.32 | 20000 | 3.3412 | 3.3412 | 1.3860 | 0.4145 | 0.2466 |
| 3.6102 | 0.33 | 20500 | 3.4148 | 3.4148 | 1.3961 | 0.4016 | 0.2488 |
| 3.542 | 0.34 | 21000 | 3.5980 | 3.5980 | 1.4508 | 0.3695 | 0.2244 |
| 3.5081 | 0.34 | 21500 | 3.6310 | 3.6310 | 1.4488 | 0.3637 | 0.2372 |
| 3.7745 | 0.35 | 22000 | 3.5246 | 3.5246 | 1.4294 | 0.3824 | 0.2378 |
| 3.5048 | 0.36 | 22500 | 3.4395 | 3.4395 | 1.4126 | 0.3973 | 0.241 |
| 3.6374 | 0.37 | 23000 | 3.3863 | 3.3863 | 1.3928 | 0.4066 | 0.247 |
| 3.5231 | 0.38 | 23500 | 3.5991 | 3.5991 | 1.4468 | 0.3693 | 0.2348 |
| 3.5893 | 0.38 | 24000 | 3.2910 | 3.2910 | 1.3692 | 0.4233 | 0.2504 |
| 3.5051 | 0.39 | 24500 | 3.3765 | 3.3765 | 1.3953 | 0.4083 | 0.2394 |
| 3.6082 | 0.4 | 25000 | 3.3060 | 3.3060 | 1.3830 | 0.4207 | 0.2412 |
| 3.4009 | 0.41 | 25500 | 3.4448 | 3.4448 | 1.4095 | 0.3964 | 0.2404 |
| 3.4239 | 0.42 | 26000 | 3.4127 | 3.4127 | 1.4027 | 0.4020 | 0.2412 |
| 3.6036 | 0.42 | 26500 | 3.5339 | 3.5339 | 1.4405 | 0.3808 | 0.2266 |
| 3.4107 | 0.43 | 27000 | 3.3319 | 3.3319 | 1.3776 | 0.4162 | 0.2542 |
| 3.3903 | 0.44 | 27500 | 3.4434 | 3.4434 | 1.4072 | 0.3966 | 0.2486 |
| 3.5583 | 0.45 | 28000 | 3.3119 | 3.3119 | 1.3728 | 0.4197 | 0.2516 |
| 3.4701 | 0.46 | 28500 | 3.3733 | 3.3733 | 1.3910 | 0.4089 | 0.2494 |
| 3.4113 | 0.46 | 29000 | 3.4144 | 3.4144 | 1.4027 | 0.4017 | 0.2414 |
| 3.5731 | 0.47 | 29500 | 3.3822 | 3.3822 | 1.3911 | 0.4073 | 0.2428 |
| 3.5738 | 0.48 | 30000 | 3.4408 | 3.4408 | 1.4120 | 0.3971 | 0.2386 |
| 3.481 | 0.49 | 30500 | 3.3255 | 3.3255 | 1.3794 | 0.4173 | 0.2514 |
| 3.4716 | 0.5 | 31000 | 3.2817 | 3.2817 | 1.3703 | 0.4250 | 0.2492 |
| 3.5487 | 0.5 | 31500 | 3.3388 | 3.3388 | 1.3851 | 0.4149 | 0.2472 |
| 3.2559 | 0.51 | 32000 | 3.3552 | 3.3552 | 1.3847 | 0.4121 | 0.249 |
| 3.5715 | 0.52 | 32500 | 3.2896 | 3.2896 | 1.3692 | 0.4236 | 0.251 |
| 3.4085 | 0.53 | 33000 | 3.2690 | 3.2690 | 1.3685 | 0.4272 | 0.2522 |
| 3.5582 | 0.54 | 33500 | 3.3228 | 3.3228 | 1.3800 | 0.4178 | 0.2462 |
| 3.4105 | 0.54 | 34000 | 3.4462 | 3.4462 | 1.4089 | 0.3961 | 0.2474 |
| 3.5401 | 0.55 | 34500 | 3.3181 | 3.3181 | 1.3751 | 0.4186 | 0.2558 |
| 3.4213 | 0.56 | 35000 | 3.2455 | 3.2455 | 1.3592 | 0.4313 | 0.2548 |
| 3.4644 | 0.57 | 35500 | 3.3900 | 3.3900 | 1.4004 | 0.4060 | 0.2388 |
| 3.4277 | 0.58 | 36000 | 3.2150 | 3.2150 | 1.3506 | 0.4366 | 0.2558 |
| 3.3376 | 0.58 | 36500 | 3.3522 | 3.3522 | 1.3944 | 0.4126 | 0.24 |
| 3.4311 | 0.59 | 37000 | 3.4152 | 3.4152 | 1.3980 | 0.4016 | 0.2498 |
| 3.336 | 0.6 | 37500 | 3.2996 | 3.2996 | 1.3674 | 0.4218 | 0.2594 |
| 3.3557 | 0.61 | 38000 | 3.2040 | 3.2040 | 1.3499 | 0.4386 | 0.2486 |
| 3.3586 | 0.62 | 38500 | 3.2784 | 3.2784 | 1.3632 | 0.4255 | 0.2534 |
| 3.3187 | 0.62 | 39000 | 3.3466 | 3.3466 | 1.3832 | 0.4136 | 0.2468 |
| 3.3899 | 0.63 | 39500 | 3.3209 | 3.3209 | 1.3795 | 0.4181 | 0.25 |
| 3.4483 | 0.64 | 40000 | 3.4685 | 3.4685 | 1.4165 | 0.3922 | 0.2436 |
| 3.3463 | 0.65 | 40500 | 3.3874 | 3.3874 | 1.3961 | 0.4064 | 0.2448 |
| 3.373 | 0.66 | 41000 | 3.2243 | 3.2243 | 1.3518 | 0.4350 | 0.2562 |
| 3.4526 | 0.66 | 41500 | 3.2819 | 3.2819 | 1.3693 | 0.4249 | 0.253 |
| 3.3581 | 0.67 | 42000 | 3.3412 | 3.3412 | 1.3843 | 0.4145 | 0.2456 |
| 3.4551 | 0.68 | 42500 | 3.2484 | 3.2484 | 1.3594 | 0.4308 | 0.2574 |
| 3.4022 | 0.69 | 43000 | 3.2010 | 3.2010 | 1.3468 | 0.4391 | 0.2568 |
| 3.3281 | 0.7 | 43500 | 3.3184 | 3.3184 | 1.3764 | 0.4185 | 0.2476 |
| 3.4044 | 0.7 | 44000 | 3.2361 | 3.2361 | 1.3528 | 0.4329 | 0.2506 |
| 3.3427 | 0.71 | 44500 | 3.2269 | 3.2269 | 1.3557 | 0.4346 | 0.2492 |
| 3.4106 | 0.72 | 45000 | 3.2758 | 3.2758 | 1.3733 | 0.4260 | 0.2434 |
| 3.4406 | 0.73 | 45500 | 3.2235 | 3.2235 | 1.3548 | 0.4352 | 0.2526 |
| 3.491 | 0.74 | 46000 | 3.2842 | 3.2842 | 1.3688 | 0.4245 | 0.2496 |
| 3.4671 | 0.74 | 46500 | 3.1811 | 3.1811 | 1.3464 | 0.4426 | 0.249 |
| 3.5774 | 0.75 | 47000 | 3.2649 | 3.2649 | 1.3608 | 0.4279 | 0.251 |
| 3.4953 | 0.76 | 47500 | 3.2681 | 3.2681 | 1.3616 | 0.4273 | 0.2538 |
| 3.4212 | 0.77 | 48000 | 3.4407 | 3.4407 | 1.4088 | 0.3971 | 0.2424 |
| 3.3285 | 0.78 | 48500 | 3.3279 | 3.3279 | 1.3771 | 0.4169 | 0.2454 |
| 3.361 | 0.78 | 49000 | 3.3717 | 3.3717 | 1.3910 | 0.4092 | 0.243 |
| 3.5419 | 0.79 | 49500 | 3.2851 | 3.2851 | 1.3748 | 0.4244 | 0.2448 |
| 3.3979 | 0.8 | 50000 | 3.3991 | 3.3991 | 1.4039 | 0.4044 | 0.2378 |
| 3.3354 | 0.81 | 50500 | 3.2636 | 3.2636 | 1.3650 | 0.4281 | 0.2456 |
| 3.4488 | 0.82 | 51000 | 3.2604 | 3.2604 | 1.3695 | 0.4287 | 0.243 |
| 3.2583 | 0.82 | 51500 | 3.2759 | 3.2759 | 1.3759 | 0.4260 | 0.2442 |
| 3.3419 | 0.83 | 52000 | 3.2789 | 3.2789 | 1.3728 | 0.4254 | 0.2494 |
| 3.4243 | 0.84 | 52500 | 3.2993 | 3.2993 | 1.3772 | 0.4219 | 0.2486 |
| 3.3154 | 0.85 | 53000 | 3.2350 | 3.2350 | 1.3585 | 0.4331 | 0.2528 |
| 3.3462 | 0.86 | 53500 | 3.2361 | 3.2361 | 1.3594 | 0.4329 | 0.2516 |
| 3.4554 | 0.86 | 54000 | 3.2307 | 3.2307 | 1.3548 | 0.4339 | 0.2528 |
| 3.5053 | 0.87 | 54500 | 3.1970 | 3.1970 | 1.3494 | 0.4398 | 0.2526 |
| 3.2745 | 0.88 | 55000 | 3.2506 | 3.2506 | 1.3614 | 0.4304 | 0.2546 |
| 3.3788 | 0.89 | 55500 | 3.2090 | 3.2090 | 1.3540 | 0.4377 | 0.2516 |
| 3.3216 | 0.9 | 56000 | 3.3347 | 3.3347 | 1.3857 | 0.4157 | 0.2462 |
| 3.2991 | 0.9 | 56500 | 3.1590 | 3.1590 | 1.3397 | 0.4465 | 0.2528 |
| 3.175 | 0.91 | 57000 | 3.2950 | 3.2950 | 1.3734 | 0.4226 | 0.2534 |
| 3.4697 | 0.92 | 57500 | 3.2021 | 3.2021 | 1.3483 | 0.4389 | 0.255 |
| 3.2413 | 0.93 | 58000 | 3.2157 | 3.2157 | 1.3523 | 0.4365 | 0.2518 |
| 3.3949 | 0.94 | 58500 | 3.2709 | 3.2709 | 1.3678 | 0.4268 | 0.2494 |
| 3.3502 | 0.94 | 59000 | 3.2263 | 3.2263 | 1.3558 | 0.4347 | 0.253 |
| 3.3492 | 0.95 | 59500 | 3.2667 | 3.2667 | 1.3659 | 0.4276 | 0.2538 |
| 3.3568 | 0.96 | 60000 | 3.1717 | 3.1717 | 1.3410 | 0.4442 | 0.2542 |
| 3.3886 | 0.97 | 60500 | 3.1800 | 3.1800 | 1.3444 | 0.4428 | 0.2534 |
| 3.2994 | 0.98 | 61000 | 3.2166 | 3.2166 | 1.3539 | 0.4364 | 0.2498 |
| 3.3381 | 0.98 | 61500 | 3.1964 | 3.1964 | 1.3484 | 0.4399 | 0.2534 |
| 3.351 | 0.99 | 62000 | 3.1664 | 3.1664 | 1.3393 | 0.4452 | 0.2538 |
| 3.4063 | 1.0 | 62500 | 3.1764 | 3.1764 | 1.3421 | 0.4434 | 0.2542 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 12,601 |