| modelId (string, 6-107 chars) | label (list) | readme (string, 0-56.2k chars) | readme_len (int64, 0-56.2k) |
|---|---|---|---|
Jeevesh8/512seq_len_6ep_bert_ft_cola-70 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-71 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-74 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-76 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-77 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-80 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-81 | null | Entry not found | 15 |
PriaPillai/distilbert-base-uncased-finetuned-query | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-query
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-query
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3668
- Accuracy: 0.8936
- F1: 0.8924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6511 | 1.0 | 30 | 0.5878 | 0.7234 | 0.6985 |
| 0.499 | 2.0 | 60 | 0.4520 | 0.8723 | 0.8683 |
| 0.3169 | 3.0 | 90 | 0.3668 | 0.8936 | 0.8924 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,569 |
connectivity/feather_berts_21 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_44 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
GioReg/mBERTnews | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mBERTnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERTnews
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1136
- Accuracy: 0.9739
- F1: 0.9732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,168 |
danielhou13/longformer-finetuned_v2_cogs402 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
CH0KUN/autotrain-TNC_Data1000_wangchanBERTa-927730545 | [
"Applied Science",
"Arts",
"Belief & Thought",
"Commerce & Finance",
"History",
"Imaginative",
"Natural & Pure Science",
"Social Science "
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- CH0KUN/autotrain-data-TNC_Data1000_wangchanBERTa
co2_eq_emissions: 0.03882318406133382
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 927730545
- CO2 Emissions (in grams): 0.03882318406133382
## Validation Metrics
- Loss: 0.346664160490036
- Accuracy: 0.9212962962962963
- Macro F1: 0.9193830593356196
- Micro F1: 0.9212962962962963
- Weighted F1: 0.9213272351125573
- Macro Precision: 0.920255423800781
- Micro Precision: 0.9212962962962963
- Weighted Precision: 0.9231182355921642
- Macro Recall: 0.920208415963133
- Micro Recall: 0.9212962962962963
- Weighted Recall: 0.9212962962962963
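As a reminder of how these averaging schemes differ (a generic illustration, not computed from this model's data): macro-F1 averages per-class F1 scores equally, weighted-F1 weights them by class support, and in single-label classification micro-F1 equals plain accuracy, which is why the accuracy and micro scores above coincide.

```python
from collections import Counter

def f1_per_class(y_true, y_pred, cls):
    # Standard one-vs-rest F1 for a single class label.
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

y_true = ["a", "a", "a", "b", "b", "c"]
y_pred = ["a", "a", "b", "b", "b", "a"]
classes = sorted(set(y_true))

# Macro: unweighted mean of per-class F1.
macro_f1 = sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)
# Weighted: per-class F1 weighted by class support.
support = Counter(y_true)
weighted_f1 = sum(f1_per_class(y_true, y_pred, c) * support[c] for c in classes) / len(y_true)
# Micro F1 reduces to plain accuracy in single-label classification.
micro_f1 = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```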
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/CH0KUN/autotrain-TNC_Data1000_wangchanBERTa-927730545
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("CH0KUN/autotrain-TNC_Data1000_wangchanBERTa-927730545", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("CH0KUN/autotrain-TNC_Data1000_wangchanBERTa-927730545", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,452 |
Jeevesh8/lecun_feather_berts-14 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Gooogr/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | Entry not found | 15 |
BraveOni/2ch-text-classification | [
"0.0",
"1.0"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- BraveOni/autotrain-data-2ch-text-classification
co2_eq_emissions: 0.08564281067919652
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 955631800
- CO2 Emissions (in grams): 0.08564281067919652
## Validation Metrics
- Loss: 0.34108611941337585
- Accuracy: 0.8671983356449375
- Precision: 0.7883283877349159
- Recall: 0.8250517598343685
- AUC: 0.9236450689447471
- F1: 0.8062721294891249
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/BraveOni/autotrain-2ch-text-classification-955631800
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("BraveOni/autotrain-2ch-text-classification-955631800", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("BraveOni/autotrain-2ch-text-classification-955631800", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,236 |
annazdr/xlm-roberta-ecoicop-polish | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | Entry not found | 15 |
RomanCast/xlmr-miam-loria-finetuned | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | ---
language:
- fr
--- | 22 |
Jeevesh8/std_pnt_04_feather_berts-75 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-84 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-24 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-74 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-40 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
c17hawke/first-model | null | # First model | 13 |
course5i/SEAD-L-6_H-256_A-8-mnli | [
"0",
"1",
"2"
] | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- mnli
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model, such as BERT [[4](https://arxiv.org/abs/1810.04805)], on a total of 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
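To make the multi-teacher ensemble idea concrete, here is a toy, dependency-free sketch of a distillation objective that combines a gold-label cross-entropy term with a KL term against the *averaged* teacher distribution. The loss form, temperature, and weighting below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sead_style_loss(student_logits, teacher_logits_list, label, alpha=0.5, T=2.0):
    # Toy multi-teacher objective: cross-entropy on the gold label plus
    # KL divergence from the averaged softened teacher distribution
    # to the student's softened distribution.
    k = len(student_logits)
    teacher_probs = [softmax(t, T) for t in teacher_logits_list]
    avg_teacher = [sum(p[i] for p in teacher_probs) / len(teacher_probs) for i in range(k)]
    student_soft = softmax(student_logits, T)
    student_hard = softmax(student_logits)
    ce = -math.log(student_hard[label])
    kl = sum(q * math.log(q / p) for q, p in zip(avg_teacher, student_soft))
    return alpha * ce + (1 - alpha) * kl

# Two hypothetical teachers guiding one student on a 3-class task.
loss = sead_style_loss([2.0, 0.5, -1.0], [[1.5, 0.2, -0.5], [2.2, 0.1, -1.2]], label=0)
```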
## SEAD-L-6_H-256_A-8-mnli
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using the SEAD framework on the **mnli** task. For weight initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased).
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import torch

hyperparameters = torch.load("training_args.bin")
```
### Evaluation results
| eval_m-accuracy | eval_m-runtime | eval_m-samples_per_second | eval_m-steps_per_second | eval_m-loss | eval_m-samples | eval_mm-accuracy | eval_mm-runtime | eval_mm-samples_per_second | eval_mm-steps_per_second | eval_mm-loss | eval_mm-samples |
|:---------------:|:--------------:|:-------------------------:|:-----------------------:|:-----------:|:--------------:|:----------------:|:---------------:|:--------------------------:|:------------------------:|:------------:|:---------------:|
| 0.8277 | 6.4665 | 1517.828 | 47.476 | 0.6014 | 9815 | 0.8310 | 5.3528 | 1836.786 | 57.54 | 0.5724 | 9832 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```bibtex
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
| 4,086 |
Pennywise881/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
mmeet611/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8628762541806019
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3052
- Accuracy: 0.8633
- F1: 0.8629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,521 |
Willy/bert-base-spanish-wwm-cased-finetuned-NLP-IE | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-NLP-IE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-NLP-IE
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6260
- Accuracy: 0.7015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6052 | 1.0 | 9 | 0.6370 | 0.7015 |
| 0.5501 | 2.0 | 18 | 0.6260 | 0.7015 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,464 |
S2312dal/M6_MLM_cross | [
"LABEL_0"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: M6_MLM_cross
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M6_MLM_cross
This model is a fine-tuned version of [S2312dal/M6_MLM](https://huggingface.co/S2312dal/M6_MLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0197
- Pearson: 0.9680
- Spearmanr: 0.9098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8.0
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0723 | 1.0 | 131 | 0.0646 | 0.8674 | 0.8449 |
| 0.0433 | 2.0 | 262 | 0.0322 | 0.9475 | 0.9020 |
| 0.0015 | 3.0 | 393 | 0.0197 | 0.9680 | 0.9098 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,585 |
deepesh0x/autotrain-mlsec-1013333734 | [
"negative",
"positive"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-mlsec
co2_eq_emissions: 308.7012650779217
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1013333734
- CO2 Emissions (in grams): 308.7012650779217
## Validation Metrics
- Loss: 0.20877738296985626
- Accuracy: 0.9396153846153846
- Precision: 0.9291791791791791
- Recall: 0.9518072289156626
- AUC: 0.9671522989580735
- F1: 0.9403570976320121
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-mlsec-1013333734
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-mlsec-1013333734", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-mlsec-1013333734", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,168 |
Elron/deberta-v3-large-sentiment | [
"0",
"1",
"2"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large
results: []
---
# deberta-v3-large-sentiment
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Model description
Test set results:
| Model | Emotion | Hate | Irony | Offensive | Sentiment |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** |
| BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 |
| RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 |
[source:papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval)
## Intended uses & limitations
Classifying attributes of interest on Twitter-like data.
## Training and evaluation data
[tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Training procedure
Fine-tuned and evaluated with [run_glue.py]()
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
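Note that `total_train_batch_size` is the effective batch size: `train_batch_size` × `gradient_accumulation_steps` (16 × 2 = 32). A minimal dependency-free sketch of why accumulation reproduces the full-batch gradient when micro-batches are equal-sized (a generic illustration, not the Trainer's actual code):

```python
def grad_mse(w, xs, ys):
    # Gradient of mean squared error for a 1-parameter linear model w*x.
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]
w = 0.5

full_batch_grad = grad_mse(w, xs, ys)

# Accumulate over 2 micro-batches of size 2, scaling each by 1/num_steps,
# as gradient accumulation does before a single optimizer step.
num_steps = 2
accum_grad = 0.0
for i in range(0, len(xs), 2):
    accum_grad += grad_mse(w, xs[i:i + 2], ys[i:i + 2]) / num_steps
```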
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0614 | 0.07 | 100 | 1.0196 | 0.4345 |
| 0.8601 | 0.14 | 200 | 0.7561 | 0.6460 |
| 0.734 | 0.21 | 300 | 0.6796 | 0.6955 |
| 0.6753 | 0.28 | 400 | 0.6521 | 0.7000 |
| 0.6408 | 0.35 | 500 | 0.6119 | 0.7440 |
| 0.5991 | 0.42 | 600 | 0.6034 | 0.7370 |
| 0.6069 | 0.49 | 700 | 0.5976 | 0.7375 |
| 0.6122 | 0.56 | 800 | 0.5871 | 0.7425 |
| 0.5908 | 0.63 | 900 | 0.5935 | 0.7445 |
| 0.5884 | 0.7 | 1000 | 0.5792 | 0.7520 |
| 0.5839 | 0.77 | 1100 | 0.5780 | 0.7555 |
| 0.5772 | 0.84 | 1200 | 0.5727 | 0.7570 |
| 0.5895 | 0.91 | 1300 | 0.5601 | 0.7550 |
| 0.5757 | 0.98 | 1400 | 0.5613 | 0.7525 |
| 0.5121 | 1.05 | 1500 | 0.5867 | 0.7600 |
| 0.5254 | 1.12 | 1600 | 0.5595 | 0.7630 |
| 0.5074 | 1.19 | 1700 | 0.5594 | 0.7585 |
| 0.4947 | 1.26 | 1800 | 0.5697 | 0.7575 |
| 0.5019 | 1.33 | 1900 | 0.5665 | 0.7580 |
| 0.5005 | 1.4 | 2000 | 0.5484 | 0.7655 |
| 0.5125 | 1.47 | 2100 | 0.5626 | 0.7605 |
| 0.5241 | 1.54 | 2200 | 0.5561 | 0.7560 |
| 0.5198 | 1.61 | 2300 | 0.5602 | 0.7600 |
| 0.5124 | 1.68 | 2400 | 0.5654 | 0.7490 |
| 0.5096 | 1.75 | 2500 | 0.5803 | 0.7515 |
| 0.4885 | 1.82 | 2600 | 0.5889 | 0.75 |
| 0.5111 | 1.89 | 2700 | 0.5508 | 0.7665 |
| 0.4868 | 1.96 | 2800 | 0.5621 | 0.7635 |
| 0.4599 | 2.04 | 2900 | 0.5995 | 0.7615 |
| 0.4147 | 2.11 | 3000 | 0.6202 | 0.7530 |
| 0.4233 | 2.18 | 3100 | 0.5875 | 0.7625 |
| 0.4324 | 2.25 | 3200 | 0.5794 | 0.7610 |
| 0.4141 | 2.32 | 3300 | 0.5902 | 0.7460 |
| 0.4306 | 2.39 | 3400 | 0.6053 | 0.7545 |
| 0.4266 | 2.46 | 3500 | 0.5979 | 0.7570 |
| 0.4227 | 2.53 | 3600 | 0.5920 | 0.7650 |
| 0.4226 | 2.6 | 3700 | 0.6166 | 0.7455 |
| 0.3978 | 2.67 | 3800 | 0.6126 | 0.7560 |
| 0.3954 | 2.74 | 3900 | 0.6152 | 0.7550 |
| 0.4209 | 2.81 | 4000 | 0.5980 | 0.75 |
| 0.3982 | 2.88 | 4100 | 0.6096 | 0.7490 |
| 0.4016 | 2.95 | 4200 | 0.6541 | 0.7425 |
| 0.3966 | 3.02 | 4300 | 0.6377 | 0.7545 |
| 0.3074 | 3.09 | 4400 | 0.6860 | 0.75 |
| 0.3551 | 3.16 | 4500 | 0.6160 | 0.7550 |
| 0.3323 | 3.23 | 4600 | 0.6714 | 0.7520 |
| 0.3171 | 3.3 | 4700 | 0.6538 | 0.7535 |
| 0.3403 | 3.37 | 4800 | 0.6774 | 0.7465 |
| 0.3396 | 3.44 | 4900 | 0.6726 | 0.7465 |
| 0.3259 | 3.51 | 5000 | 0.6465 | 0.7480 |
| 0.3392 | 3.58 | 5100 | 0.6860 | 0.7460 |
| 0.3251 | 3.65 | 5200 | 0.6697 | 0.7495 |
| 0.3253 | 3.72 | 5300 | 0.6770 | 0.7430 |
| 0.3455 | 3.79 | 5400 | 0.7177 | 0.7360 |
| 0.3323 | 3.86 | 5500 | 0.6943 | 0.7400 |
| 0.3335 | 3.93 | 5600 | 0.6507 | 0.7555 |
| 0.3368 | 4.0 | 5700 | 0.6580 | 0.7485 |
| 0.2479 | 4.07 | 5800 | 0.7667 | 0.7430 |
| 0.2613 | 4.14 | 5900 | 0.7513 | 0.7505 |
| 0.2557 | 4.21 | 6000 | 0.7927 | 0.7485 |
| 0.243 | 4.28 | 6100 | 0.7792 | 0.7450 |
| 0.2473 | 4.35 | 6200 | 0.8107 | 0.7355 |
| 0.2447 | 4.42 | 6300 | 0.7851 | 0.7370 |
| 0.2515 | 4.49 | 6400 | 0.7529 | 0.7465 |
| 0.274 | 4.56 | 6500 | 0.7390 | 0.7465 |
| 0.2674 | 4.63 | 6600 | 0.7658 | 0.7460 |
| 0.2416 | 4.7 | 6700 | 0.7915 | 0.7485 |
| 0.2432 | 4.77 | 6800 | 0.7989 | 0.7435 |
| 0.2595 | 4.84 | 6900 | 0.7850 | 0.7380 |
| 0.2736 | 4.91 | 7000 | 0.7577 | 0.7395 |
| 0.2783 | 4.98 | 7100 | 0.7650 | 0.7405 |
| 0.2304 | 5.05 | 7200 | 0.8542 | 0.7385 |
| 0.1937 | 5.12 | 7300 | 0.8390 | 0.7345 |
| 0.1878 | 5.19 | 7400 | 0.9150 | 0.7330 |
| 0.1921 | 5.26 | 7500 | 0.8792 | 0.7405 |
| 0.1916 | 5.33 | 7600 | 0.8892 | 0.7410 |
| 0.2011 | 5.4 | 7700 | 0.9012 | 0.7325 |
| 0.211 | 5.47 | 7800 | 0.8608 | 0.7420 |
| 0.2194 | 5.54 | 7900 | 0.8852 | 0.7320 |
| 0.205 | 5.61 | 8000 | 0.8803 | 0.7385 |
| 0.1981 | 5.68 | 8100 | 0.8681 | 0.7330 |
| 0.1908 | 5.75 | 8200 | 0.9020 | 0.7435 |
| 0.1942 | 5.82 | 8300 | 0.8780 | 0.7410 |
| 0.1958 | 5.89 | 8400 | 0.8937 | 0.7345 |
| 0.1883 | 5.96 | 8500 | 0.9121 | 0.7360 |
| 0.1819 | 6.04 | 8600 | 0.9409 | 0.7430 |
| 0.145 | 6.11 | 8700 | 1.1390 | 0.7265 |
| 0.1696 | 6.18 | 8800 | 0.9189 | 0.7430 |
| 0.1488 | 6.25 | 8900 | 0.9718 | 0.7400 |
| 0.1637 | 6.32 | 9000 | 0.9702 | 0.7450 |
| 0.1547 | 6.39 | 9100 | 1.0033 | 0.7410 |
| 0.1605 | 6.46 | 9200 | 0.9973 | 0.7355 |
| 0.1552 | 6.53 | 9300 | 1.0491 | 0.7290 |
| 0.1731 | 6.6 | 9400 | 1.0271 | 0.7335 |
| 0.1738 | 6.67 | 9500 | 0.9575 | 0.7430 |
| 0.1669 | 6.74 | 9600 | 0.9614 | 0.7350 |
| 0.1347 | 6.81 | 9700 | 1.0263 | 0.7365 |
| 0.1593 | 6.88 | 9800 | 1.0173 | 0.7360 |
| 0.1549 | 6.95 | 9900 | 1.0398 | 0.7350 |
| 0.1675 | 7.02 | 10000 | 0.9975 | 0.7380 |
| 0.1182 | 7.09 | 10100 | 1.1059 | 0.7350 |
| 0.1351 | 7.16 | 10200 | 1.0933 | 0.7400 |
| 0.1496 | 7.23 | 10300 | 1.0731 | 0.7355 |
| 0.1197 | 7.3 | 10400 | 1.1089 | 0.7360 |
| 0.1111 | 7.37 | 10500 | 1.1381 | 0.7405 |
| 0.1494 | 7.44 | 10600 | 1.0252 | 0.7425 |
| 0.1235 | 7.51 | 10700 | 1.0906 | 0.7360 |
| 0.133 | 7.58 | 10800 | 1.1796 | 0.7375 |
| 0.1248 | 7.65 | 10900 | 1.1332 | 0.7420 |
| 0.1268 | 7.72 | 11000 | 1.1304 | 0.7415 |
| 0.1368 | 7.79 | 11100 | 1.1345 | 0.7380 |
| 0.1228 | 7.86 | 11200 | 1.2018 | 0.7320 |
| 0.1281 | 7.93 | 11300 | 1.1884 | 0.7350 |
| 0.1449 | 8.0 | 11400 | 1.1571 | 0.7345 |
| 0.1025 | 8.07 | 11500 | 1.1538 | 0.7345 |
| 0.1199 | 8.14 | 11600 | 1.2113 | 0.7390 |
| 0.1016 | 8.21 | 11700 | 1.2882 | 0.7370 |
| 0.114 | 8.28 | 11800 | 1.2872 | 0.7390 |
| 0.1019 | 8.35 | 11900 | 1.2876 | 0.7380 |
| 0.1142 | 8.42 | 12000 | 1.2791 | 0.7385 |
| 0.1135 | 8.49 | 12100 | 1.2883 | 0.7380 |
| 0.1139 | 8.56 | 12200 | 1.2829 | 0.7360 |
| 0.1107 | 8.63 | 12300 | 1.2698 | 0.7365 |
| 0.1183 | 8.7 | 12400 | 1.2660 | 0.7345 |
| 0.1064 | 8.77 | 12500 | 1.2889 | 0.7365 |
| 0.0895 | 8.84 | 12600 | 1.3480 | 0.7330 |
| 0.1244 | 8.91 | 12700 | 1.2872 | 0.7325 |
| 0.1209 | 8.98 | 12800 | 1.2681 | 0.7375 |
| 0.1144 | 9.05 | 12900 | 1.2711 | 0.7370 |
| 0.1034 | 9.12 | 13000 | 1.2801 | 0.7360 |
| 0.113 | 9.19 | 13100 | 1.2801 | 0.7350 |
| 0.0994 | 9.26 | 13200 | 1.2920 | 0.7360 |
| 0.0966 | 9.33 | 13300 | 1.2761 | 0.7335 |
| 0.0939 | 9.4 | 13400 | 1.2909 | 0.7365 |
| 0.0975 | 9.47 | 13500 | 1.2953 | 0.7360 |
| 0.0842 | 9.54 | 13600 | 1.3179 | 0.7335 |
| 0.0871 | 9.61 | 13700 | 1.3149 | 0.7385 |
| 0.1162 | 9.68 | 13800 | 1.3124 | 0.7350 |
| 0.085 | 9.75 | 13900 | 1.3207 | 0.7355 |
| 0.0966 | 9.82 | 14000 | 1.3248 | 0.7335 |
| 0.1064 | 9.89 | 14100 | 1.3261 | 0.7335 |
| 0.1046 | 9.96 | 14200 | 1.3255 | 0.7360 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
| 10,871 |
Parsa/LD50-prediction | [
"LABEL_0"
Toxicity LD50 prediction (regression model) based on the <a href = "https://tdcommons.ai/single_pred_tasks/tox/"> Acute Toxicity LD50 </a> dataset.
For now, download the model to run predictions; a Colab notebook will be made available in the future. | 263 |
sarahmiller137/bioclinical-bert-ft-m3-lc | null | ---
language:
- en
thumbnail: "url to a thumbnail used in social sharing"
tags:
- 'text classification'
license: cc
datasets:
- MIMIC-III
---
## Model information:
This model is the [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) model fine-tuned on radiology report texts from the MIMIC-III database. The task performed was text classification, in order to benchmark this model against a selection of other BERT variants for classifying MIMIC-III radiology report texts into two classes. Radiology reports linked to an ICD9 diagnosis code for lung cancer were labelled 1, and a random sample of reports not linked to any cancer diagnosis code at all were labelled 0.
## Intended uses:
This model is intended to be used to classify texts to identify the presence of lung cancer. The model will predict labels of [0,1].
## Limitations:
Note that the dataset and model may not be fully representative of or suitable for all needs; it is recommended that the dataset paper and the base model card be reviewed before use:
- [MIMIC-III](https://www.nature.com/articles/sdata201635.pdf)
- [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT)
## How to use:
Load the model from the library using the following checkpoints:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/bioclinical-bert-ft-m3-lc")
model = AutoModel.from_pretrained("sarahmiller137/bioclinical-bert-ft-m3-lc")
```
| 1,623 |
leminhds/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1677
- eval_accuracy: 0.924
- eval_f1: 0.9238
- eval_runtime: 2.5188
- eval_samples_per_second: 794.026
- eval_steps_per_second: 12.704
- epoch: 1.0
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,315 |
pollner/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877887788778878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3183
- Accuracy: 0.8767
- F1: 0.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,520 |
danielreales00/results | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
| 996 |
xliu128/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2891 | 0.7429 |
| 3.7868 | 2.0 | 636 | 1.8755 | 0.8374 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6928 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9184 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,890 |
morenolq/thext-bio-scibert | [
"LABEL_0"
] | ---
language: "en"
tags:
- bert
- regression
- pytorch
pipeline:
- text-classification
widget:
- text: "We propose a new approach, based on Transformer-based encoding, to highlight extraction. To the best of our knowledge, this is the first attempt to use transformer architectures to address automatic highlight generation. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "We design a context-aware sentence-level regressor, in which the semantic similarity between candidate sentences and highlights is estimated by also attending the contextual knowledge provided by the other paper sections. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "Fig. 2, Fig. 3, Fig. 4 show the effect of varying the number K of selected highlights on the extraction performance. As expected, recall values increase while increasing the number of selected highlights, whereas precision values show an opposite trend. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
---
# General Information
This model is trained on journal publications belonging to the domain: **Biology and Medicine**.
This is an `allenai/scibert_scivocab_cased` model trained in the scientific domain. The model is trained with a regression objective to estimate the relevance of a sentence according to the provided context (e.g., the abstract of the scientific paper).
The model is used in the paper 'Transformer-based highlights extraction from scientific papers', published in the journal Knowledge-Based Systems.
The model is able to achieve state-of-the-art performance in the task of highlights extraction from scientific papers.
Access to the full paper: [here](https://doi.org/10.1016/j.knosys.2022.109382).
# Usage:
For detailed usage, please refer to the official repository: https://github.com/MorenoLaQuatra/THExt
# References:
If you find it useful, please cite the following paper:
```bibtex
@article{thext,
title={Transformer-based highlights extraction from scientific papers},
author={La Quatra, Moreno and Cagliero, Luca},
journal={Knowledge-Based Systems},
pages={109382},
year={2022},
publisher={Elsevier}
}
``` | 5,094 |
krupper/autotrain-text-complexity-classification-1125541240 | null | 0 | |
domenicrosati/SPECTER-finetuned-DAGPap22 | null | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SPECTER-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPECTER-finetuned-DAGPap22
This model is a fine-tuned version of [allenai/specter](https://huggingface.co/allenai/specter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
- Accuracy: 0.9993
- F1: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.3422 | 1.0 | 669 | 0.4135 | 0.8914 | 0.9140 |
| 0.1074 | 2.0 | 1338 | 0.1216 | 0.9746 | 0.9811 |
| 0.0329 | 3.0 | 2007 | 0.0064 | 0.9989 | 0.9992 |
| 0.0097 | 4.0 | 2676 | 0.0132 | 0.9972 | 0.9980 |
| 0.0123 | 5.0 | 3345 | 0.0231 | 0.9961 | 0.9971 |
| 0.0114 | 6.0 | 4014 | 0.0080 | 0.9985 | 0.9989 |
| 0.0029 | 7.0 | 4683 | 0.2207 | 0.9727 | 0.9797 |
| 0.0075 | 8.0 | 5352 | 0.0145 | 0.9974 | 0.9981 |
| 0.0098 | 9.0 | 6021 | 0.0047 | 0.9994 | 0.9996 |
| 0.0025 | 10.0 | 6690 | 0.0000 | 1.0 | 1.0 |
| 0.0044 | 11.0 | 7359 | 0.0035 | 0.9993 | 0.9995 |
| 0.0 | 12.0 | 8028 | 0.0027 | 0.9996 | 0.9997 |
| 0.0027 | 13.0 | 8697 | 0.0036 | 0.9993 | 0.9995 |
| 0.0055 | 14.0 | 9366 | 0.0017 | 0.9998 | 0.9999 |
| 0.0 | 15.0 | 10035 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 16.0 | 10704 | 0.0000 | 1.0 | 1.0 |
| 0.0022 | 17.0 | 11373 | 0.0111 | 0.9981 | 0.9986 |
| 0.0004 | 18.0 | 12042 | 0.0011 | 0.9994 | 0.9996 |
| 0.0 | 19.0 | 12711 | 0.0020 | 0.9994 | 0.9996 |
| 0.0 | 20.0 | 13380 | 0.0023 | 0.9993 | 0.9995 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 2,844 |
jhonparra18/facebook-data2vec-text-base-fine-tuning-cvs-hf-studio-name | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
PGT/nystromformer-s-artificial-balanced-max500-490000-0 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | Entry not found | 15 |
anahitapld/dbd_electra | null | ---
license: apache-2.0
---
| 28 |
anahitapld/dbd_Roberta | null | ---
license: apache-2.0
---
| 28 |
okho0653/Bio_ClinicalBERT-zero-shot-finetuned-50cad-50noncad | null | Entry not found | 15 |
Alireza1044/albert-base-v2-qnli | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metric:
name: Accuracy
type: accuracy
value: 0.9137836353651839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3608
- Accuracy: 0.9138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| 1,371 |
AnonymousSub/EManuals_BERT_copy_wikiqa | null | Entry not found | 15 |
AnonymousSub/consert-emanuals-s10-SR | null | Entry not found | 15 |
CLTL/icf-levels-enr | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Energy Levels (ICF b1300)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing energy level. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about energy level in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with the energy level.
3 | Slight fatigue that causes mild limitations.
2 | Moderate fatigue; the patient gets easily tired from light activities or needs a long time to recover after an activity.
1 | Severe fatigue; the patient is capable of very little.
0 | Very severe fatigue; unable to do anything and mostly lies in bed.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
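Since the regression head can produce values slightly outside the 0–4 scale, downstream code may want to map raw predictions back to a discrete level. A minimal sketch (not part of the released model; the clamp-and-round policy is an assumption):

```python
def to_level(prediction: float, lo: int = 0, hi: int = 4) -> int:
    """Clamp a raw regression output to the [lo, hi] scale and
    round it to the nearest integer functioning level."""
    return round(min(max(prediction, lo), hi))

print(to_level(4.2))   # out-of-scale prediction mapped back onto the scale
print(to_level(1.98))
```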
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
'roberta',
'CLTL/icf-levels-enr',
use_cuda=False,
)
example = 'Al jaren extreme vermoeidheid overdag, valt overdag in slaap tijdens school- en werkactiviteiten en soms zelfs tijdens een gesprek.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.98
```
The raw outputs look like this:
```
[[1.97520316]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.48 | 0.43
mean squared error | 0.49 | 0.42
root mean squared error | 0.70 | 0.65
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| 3,260 |
CLTL/icf-levels-stm | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Emotional Functioning Levels (ICF b152)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing emotional functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about emotional functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with emotional functioning: emotions are appropriate, well regulated, etc.
3 | Slight problem with emotional functioning: irritable, gloomy, etc.
2 | Moderate problem with emotional functioning: negative emotions, such as fear, anger, sadness, etc.
1 | Severe problem with emotional functioning: intense negative emotions, such as fear, anger, sadness, etc.
0 | Flat affect, apathy, unstable, inappropriate emotions.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
'roberta',
'CLTL/icf-levels-stm',
use_cuda=False,
)
example = 'Naarmate het somatische beeld een herstellende trend laat zien, valt op dat patient zich depressief en suicidaal uit.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.60
```
The raw outputs look like this:
```
[[1.60418844]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.76 | 0.68
mean squared error | 1.03 | 0.87
root mean squared error | 1.01 | 0.93
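The note-level scores above aggregate sentence-level predictions per note. A sketch of how such aggregation and error metrics could be computed, assuming mean aggregation over a note's sentences (the actual aggregation used in the project may differ):

```python
import math
from statistics import mean

def note_level_errors(notes):
    """notes: list of (gold_level, [sentence_predictions]) pairs.
    Aggregates sentence predictions per note by their mean, then
    computes MAE and RMSE over the notes."""
    errors = [mean(preds) - gold for gold, preds in notes]
    mae = mean(abs(e) for e in errors)
    rmse = math.sqrt(mean(e * e for e in errors))
    return mae, rmse

# Two toy notes with gold levels 2 and 4.
mae, rmse = note_level_errors([(2, [1.6, 2.2]), (4, [3.5, 4.1, 3.9])])
print(mae, rmse)
```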
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| 3,364 |
Cathy/reranking_model | [
"contradiction",
"neutral",
"entailment"
] | Entry not found | 15 |
CleveGreen/JobClassifier_v2 | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_100",
"LABEL_101",
"LABEL_102",
"LABEL_103",
"LABEL_104",
"LABEL_105",
"LABEL_106",
"LABEL_107",
"LABEL_108",
"LABEL_109",
"LABEL_11",
"LABEL_110",
"LABEL_111",
"LABEL_112",
"LABEL_113",
"LABEL_114",
"LABEL_115",
"LABEL_116",
"LABEL_... | Entry not found | 15 |
Davlan/naija-twitter-sentiment-afriberta-large | [
"negative",
"neutral",
"positive"
] |
---
language:
- hau
- ibo
- pcm
- yor
- multilingual
---
# naija-twitter-sentiment-afriberta-large
## Model description
**naija-twitter-sentiment-afriberta-large** is the first multilingual twitter **sentiment classification** model for four (4) Nigerian languages (Hausa, Igbo, Nigerian Pidgin, and Yorùbá) based on a fine-tuned castorini/afriberta_large model.
It achieves the **state-of-the-art performance** for the twitter sentiment classification task trained on the [NaijaSenti corpus](https://github.com/hausanlp/NaijaSenti).
The model has been trained to classify tweets into 3 sentiment classes: negative, neutral and positive
Specifically, this model is a *castorini/afriberta_large* model that was fine-tuned on an aggregation of 4 Nigerian language datasets obtained from [NaijaSenti](https://github.com/hausanlp/NaijaSenti) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for Sentiment Classification.
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
MODEL = "Davlan/naija-twitter-sentiment-afriberta-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
text = "I like you"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
id2label = {0:"positive", 1:"neutral", 2:"negative"}
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
#### Limitations and bias
This model is limited by its training dataset and domain i.e Twitter. This may not generalize well for all use cases in different domains.
## Training procedure
This model was trained on a single Nvidia RTX 2080 GPU with recommended hyperparameters from the [original NaijaSenti paper](https://arxiv.org/abs/2201.08277).
## Eval results on Test set (F-score), average over 5 runs.
language|F1-score
-|-
hau |81.2
ibo |80.8
pcm |74.5
yor |80.4
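The table above reports per-language F-scores; a macro-average across the four languages (a derived figure, not one reported in the card) can be computed directly:

```python
# Per-language F-scores from the evaluation table above.
scores = {"hau": 81.2, "ibo": 80.8, "pcm": 74.5, "yor": 80.4}

# Unweighted (macro) average across the four languages.
macro_f1 = sum(scores.values()) / len(scores)
print(round(macro_f1, 2))
```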
### BibTeX entry and citation info
```
@inproceedings{Muhammad2022NaijaSentiAN,
title={NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis},
author={Shamsuddeen Hassan Muhammad and David Ifeoluwa Adelani and Sebastian Ruder and Ibrahim Said Ahmad and Idris Abdulmumin and Bello Shehu Bello and Monojit Choudhury and Chris C. Emezue and Saheed Salahudeen Abdullahi and Anuoluwapo Aremu and Alipio Jeorge and Pavel B. Brazdil},
year={2022}
}
```
| 2,710 |
EMBEDDIA/english-tweetsentiment | [
"Negative",
"Neutral",
"Positive"
] | Entry not found | 15 |
Herais/pred_genre | [
"传奇",
"传记",
"其它",
"军旅",
"农村",
"宫廷",
"武打",
"涉案",
"神话",
"科幻",
"都市",
"青少",
"革命"
] | ---
language:
- zh
tags:
- classification
license: apache-2.0
datasets:
- Custom
metrics:
- rouge
---
This model predicts the genre given a synopsis of about 200 Chinese characters.
The model is trained on TV and Movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
from transformers import BertTokenizer, BertForSequenceClassification
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
checkpoint = "Herais/pred_genre"
tokenizer = BertTokenizer.from_pretrained(checkpoint,
problem_type="single_label_classification")
model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)
label2id_genre = {'涉案': 7, '都市': 10, '革命': 12, '农村': 4, '传奇': 0,
'其它': 2, '传记': 1, '青少': 11, '军旅': 3, '武打': 6,
'科幻': 9, '神话': 8, '宫廷': 5}
id2label_genre = {7: '涉案', 10: '都市', 12: '革命', 4: '农村', 0: '传奇',
2: '其它', 1: '传记', 11: '青少', 3: '军旅', 6: '武打',
9: '科幻', 8: '神话', 5: '宫廷'}
synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """
inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt').to(device)
model.eval()
outputs = model(**inputs)
label_ids_pred = torch.argmax(outputs.logits, dim=1).to('cpu').numpy()
labels_pred = [id2label_genre[label_id] for label_id in label_ids_pred]
print(labels_pred)
# ['涉案']
Citation
TBA | 1,820 |
ItuThesis2022MlviNikw/deberta-v3-base | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
JBNLRY/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5471613867597194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8366
- Matthews Correlation: 0.5472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5224 | 1.0 | 535 | 0.5432 | 0.4243 |
| 0.3447 | 2.0 | 1070 | 0.4968 | 0.5187 |
| 0.2347 | 3.0 | 1605 | 0.6540 | 0.5280 |
| 0.1747 | 4.0 | 2140 | 0.7547 | 0.5367 |
| 0.1255 | 5.0 | 2675 | 0.8366 | 0.5472 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 2,000 |
Jeska/VaccinChatSentenceClassifierDutch_fromBERTjeDIAL | [
"chitchat_ask_bye",
"chitchat_ask_hi",
"chitchat_ask_hi_de",
"chitchat_ask_hi_en",
"chitchat_ask_hi_fr",
"chitchat_ask_hoe_gaat_het",
"chitchat_ask_name",
"chitchat_ask_thanks",
"faq_ask_aantal_gevaccineerd",
"faq_ask_aantal_gevaccineerd_wereldwijd",
"faq_ask_afspraak_afzeggen",
"faq_ask_afspr... | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
This model is a fine-tuned version of [Jeska/BertjeWDialDataQA20k](https://huggingface.co/Jeska/BertjeWDialDataQA20k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8355
- Accuracy: 0.6322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4418 | 1.0 | 1457 | 2.3866 | 0.5406 |
| 1.7742 | 2.0 | 2914 | 1.9365 | 0.6069 |
| 1.1313 | 3.0 | 4371 | 1.8355 | 0.6322 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,512 |
JonatanGk/roberta-base-ca-finetuned-hate-speech-offensive-catalan | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-ca-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ca-finetuned-mnli
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4137
- Accuracy: 0.8778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3699 | 1.0 | 1255 | 0.3712 | 0.8669 |
| 0.3082 | 2.0 | 2510 | 0.3401 | 0.8766 |
| 0.2375 | 3.0 | 3765 | 0.4137 | 0.8778 |
| 0.1889 | 4.0 | 5020 | 0.4671 | 0.8733 |
| 0.1486 | 5.0 | 6275 | 0.5205 | 0.8749 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| 1,614 |
Kao/samyarn-bert-base-multilingual-cased | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | samyarn-bert-base-multilingual-cased
kao | 40 |
Lazaro97/results | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.8404
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3793
- Accuracy: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3542 | 1.0 | 125 | 0.3611 | 0.839 |
| 0.2255 | 2.0 | 250 | 0.3793 | 0.8404 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| 1,673 |
Lumos/imdb2 | null | Entry not found | 15 |
Lumos/yahoo2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
M47Labs/arabert_multiclass_news | [
"culture",
"finance",
"medical",
"politics",
"religion",
"sports",
"tech"
] | Entry not found | 15 |
MINYOUNG/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5494735380761103
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8540
- Matthews Correlation: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5219 | 1.0 | 535 | 0.5314 | 0.4095 |
| 0.346 | 2.0 | 1070 | 0.5141 | 0.5054 |
| 0.2294 | 3.0 | 1605 | 0.6351 | 0.5200 |
| 0.1646 | 4.0 | 2140 | 0.7575 | 0.5459 |
| 0.1235 | 5.0 | 2675 | 0.8540 | 0.5495 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
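The Matthews correlation reported above is a correlation-style metric in [-1, 1]. A minimal, dependency-free sketch of the formula for binary predictions (illustrative only; the card's value was produced by the Trainer's evaluation library, not this snippet):

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation from binary confusion-matrix counts.

    Illustrative only; the value reported in the card comes from the
    evaluation library used by the Trainer, not from this sketch.
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(matthews_corrcoef(50, 40, 5, 5))  # ≈ 0.798, a fairly strong correlation
```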
| 1,999 |
Maha/OGBV-gender-twtrobertabase-en-davidson | null | Entry not found | 15 |
MickyMike/7-GPT2SP-jirasoftware | [
"LABEL_0"
] | Entry not found | 15 |
Parsa/BBB_prediction_classification_SMILES | null | A fine-tuned model based on 'DeepChem/ChemBERTa-77M-MLM' for blood-brain barrier permeability prediction from SMILES strings. BiLSTM models are also available, alongside these two models, at 'https://github.com/mephisto121/BBBNLP' if you want to check them all and review the code.
[](https://colab.research.google.com/drive/1jGYf3sq93yO4EbgVaEl3nlClrVatVaXS#scrollTo=AMEdQItmilAw) | 465 |
SCORE/claim3a-distilbert-base-uncased | null | Entry not found | 15 |
Sakil/IMDB_URDUSENTIMENT_MODEL | null | ---
language:
- en
tags:
- text classification
license: apache-2.0
widget:
- text: "میں تمہیں پسند کرتا ہوں. </s></s> میں تم سے پیار کرتا ہوں."
---
* IMDB_URDUSENTIMENT_MODEL
I used the IMDB Urdu dataset to create a custom model using DistilBertForSequenceClassification. | 285 |
SetFit/deberta-v3-base__sst2__all-train | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-base__sst2__all-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base__sst2__all-train
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6964
- Accuracy: 0.49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.6964 | 0.49 |
| No log | 2.0 | 14 | 0.7010 | 0.49 |
| No log | 3.0 | 21 | 0.7031 | 0.49 |
| No log | 4.0 | 28 | 0.7054 | 0.49 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 1,592 |
SetFit/deberta-v3-large__sst2__train-16-4 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-4
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6329
- Accuracy: 0.6392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6945 | 1.0 | 7 | 0.7381 | 0.2857 |
| 0.7072 | 2.0 | 14 | 0.7465 | 0.2857 |
| 0.6548 | 3.0 | 21 | 0.7277 | 0.4286 |
| 0.5695 | 4.0 | 28 | 0.6738 | 0.5714 |
| 0.4615 | 5.0 | 35 | 0.8559 | 0.5714 |
| 0.0823 | 6.0 | 42 | 1.0983 | 0.5714 |
| 0.0274 | 7.0 | 49 | 1.9937 | 0.5714 |
| 0.0106 | 8.0 | 56 | 2.2209 | 0.5714 |
| 0.0039 | 9.0 | 63 | 2.2114 | 0.5714 |
| 0.0031 | 10.0 | 70 | 2.2808 | 0.5714 |
| 0.0013 | 11.0 | 77 | 2.3707 | 0.5714 |
| 0.0008 | 12.0 | 84 | 2.4902 | 0.5714 |
| 0.0005 | 13.0 | 91 | 2.5208 | 0.5714 |
| 0.0007 | 14.0 | 98 | 2.5683 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,216 |
SetFit/deberta-v3-large__sst2__train-16-5 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-5
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5433
- Accuracy: 0.7924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6774 | 1.0 | 7 | 0.7450 | 0.2857 |
| 0.7017 | 2.0 | 14 | 0.7552 | 0.2857 |
| 0.6438 | 3.0 | 21 | 0.7140 | 0.4286 |
| 0.3525 | 4.0 | 28 | 0.5570 | 0.7143 |
| 0.2061 | 5.0 | 35 | 0.5303 | 0.8571 |
| 0.0205 | 6.0 | 42 | 0.6706 | 0.8571 |
| 0.0068 | 7.0 | 49 | 0.8284 | 0.8571 |
| 0.0029 | 8.0 | 56 | 0.9281 | 0.8571 |
| 0.0015 | 9.0 | 63 | 0.9871 | 0.8571 |
| 0.0013 | 10.0 | 70 | 1.0208 | 0.8571 |
| 0.0008 | 11.0 | 77 | 1.0329 | 0.8571 |
| 0.0005 | 12.0 | 84 | 1.0348 | 0.8571 |
| 0.0004 | 13.0 | 91 | 1.0437 | 0.8571 |
| 0.0005 | 14.0 | 98 | 1.0512 | 0.8571 |
| 0.0004 | 15.0 | 105 | 1.0639 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,278 |
SetFit/distilbert-base-uncased__ethos_binary__all-train | [
"hate speech",
"no hate speech"
] | Entry not found | 15 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-1 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0606
- Accuracy: 0.4745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0941 | 1.0 | 19 | 1.1045 | 0.2 |
| 0.9967 | 2.0 | 38 | 1.1164 | 0.35 |
| 0.8164 | 3.0 | 57 | 1.1570 | 0.4 |
| 0.5884 | 4.0 | 76 | 1.2403 | 0.35 |
| 0.3322 | 5.0 | 95 | 1.3815 | 0.35 |
| 0.156 | 6.0 | 114 | 1.8102 | 0.3 |
| 0.0576 | 7.0 | 133 | 2.1439 | 0.4 |
| 0.0227 | 8.0 | 152 | 2.4368 | 0.3 |
| 0.0133 | 9.0 | 171 | 2.5994 | 0.4 |
| 0.009 | 10.0 | 190 | 2.7388 | 0.35 |
| 0.0072 | 11.0 | 209 | 2.8287 | 0.35 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,079 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-5 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1327
- Accuracy: 0.57
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0972 | 1.0 | 19 | 1.0470 | 0.45 |
| 0.9738 | 2.0 | 38 | 0.9244 | 0.65 |
| 0.7722 | 3.0 | 57 | 0.8612 | 0.65 |
| 0.4929 | 4.0 | 76 | 0.6759 | 0.75 |
| 0.2435 | 5.0 | 95 | 0.7273 | 0.7 |
| 0.0929 | 6.0 | 114 | 0.6444 | 0.85 |
| 0.0357 | 7.0 | 133 | 0.7671 | 0.8 |
| 0.0173 | 8.0 | 152 | 0.7599 | 0.75 |
| 0.0121 | 9.0 | 171 | 0.8140 | 0.8 |
| 0.0081 | 10.0 | 190 | 0.7861 | 0.8 |
| 0.0066 | 11.0 | 209 | 0.8318 | 0.8 |
| 0.0057 | 12.0 | 228 | 0.8777 | 0.8 |
| 0.0053 | 13.0 | 247 | 0.8501 | 0.8 |
| 0.004 | 14.0 | 266 | 0.8603 | 0.8 |
| 0.004 | 15.0 | 285 | 0.8787 | 0.8 |
| 0.0034 | 16.0 | 304 | 0.8969 | 0.8 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,387 |
SetFit/distilbert-base-uncased__sst2__train-16-1 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6012
- Accuracy: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6983 | 1.0 | 7 | 0.7036 | 0.2857 |
| 0.6836 | 2.0 | 14 | 0.7181 | 0.2857 |
| 0.645 | 3.0 | 21 | 0.7381 | 0.2857 |
| 0.5902 | 4.0 | 28 | 0.7746 | 0.2857 |
| 0.5799 | 5.0 | 35 | 0.7242 | 0.5714 |
| 0.3584 | 6.0 | 42 | 0.6935 | 0.5714 |
| 0.2596 | 7.0 | 49 | 0.7041 | 0.5714 |
| 0.1815 | 8.0 | 56 | 0.5930 | 0.7143 |
| 0.0827 | 9.0 | 63 | 0.6976 | 0.7143 |
| 0.0613 | 10.0 | 70 | 0.7346 | 0.7143 |
| 0.0356 | 11.0 | 77 | 0.6992 | 0.5714 |
| 0.0158 | 12.0 | 84 | 0.7328 | 0.5714 |
| 0.013 | 13.0 | 91 | 0.7819 | 0.5714 |
| 0.0103 | 14.0 | 98 | 0.8589 | 0.5714 |
| 0.0087 | 15.0 | 105 | 0.9177 | 0.5714 |
| 0.0076 | 16.0 | 112 | 0.9519 | 0.5714 |
| 0.0078 | 17.0 | 119 | 0.9556 | 0.5714 |
| 0.006 | 18.0 | 126 | 0.9542 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,479 |
SetFit/distilbert-base-uncased__sst2__train-16-5 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6537
- Accuracy: 0.6332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6925 | 1.0 | 7 | 0.6966 | 0.2857 |
| 0.6703 | 2.0 | 14 | 0.7045 | 0.2857 |
| 0.6404 | 3.0 | 21 | 0.7205 | 0.2857 |
| 0.555 | 4.0 | 28 | 0.7548 | 0.2857 |
| 0.5179 | 5.0 | 35 | 0.6745 | 0.5714 |
| 0.3038 | 6.0 | 42 | 0.7260 | 0.5714 |
| 0.2089 | 7.0 | 49 | 0.8016 | 0.5714 |
| 0.1303 | 8.0 | 56 | 0.8202 | 0.5714 |
| 0.0899 | 9.0 | 63 | 0.9966 | 0.5714 |
| 0.0552 | 10.0 | 70 | 1.1887 | 0.5714 |
| 0.0333 | 11.0 | 77 | 1.2163 | 0.5714 |
| 0.0169 | 12.0 | 84 | 1.2874 | 0.5714 |
| 0.0136 | 13.0 | 91 | 1.3598 | 0.5714 |
| 0.0103 | 14.0 | 98 | 1.4237 | 0.5714 |
| 0.0089 | 15.0 | 105 | 1.4758 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,293 |
SetFit/distilbert-base-uncased__subj__train-8-0 | [
"objective",
"subjective"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4440
- Accuracy: 0.789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.6868 | 0.5 |
| 0.6683 | 2.0 | 6 | 0.6804 | 0.75 |
| 0.6375 | 3.0 | 9 | 0.6702 | 0.75 |
| 0.5997 | 4.0 | 12 | 0.6686 | 0.75 |
| 0.5345 | 5.0 | 15 | 0.6720 | 0.75 |
| 0.4673 | 6.0 | 18 | 0.6646 | 0.75 |
| 0.4214 | 7.0 | 21 | 0.6494 | 0.75 |
| 0.3439 | 8.0 | 24 | 0.6313 | 0.75 |
| 0.3157 | 9.0 | 27 | 0.6052 | 0.75 |
| 0.2329 | 10.0 | 30 | 0.5908 | 0.75 |
| 0.1989 | 11.0 | 33 | 0.5768 | 0.75 |
| 0.1581 | 12.0 | 36 | 0.5727 | 0.75 |
| 0.1257 | 13.0 | 39 | 0.5678 | 0.75 |
| 0.1005 | 14.0 | 42 | 0.5518 | 0.75 |
| 0.0836 | 15.0 | 45 | 0.5411 | 0.75 |
| 0.0611 | 16.0 | 48 | 0.5320 | 0.75 |
| 0.0503 | 17.0 | 51 | 0.5299 | 0.75 |
| 0.0407 | 18.0 | 54 | 0.5368 | 0.75 |
| 0.0332 | 19.0 | 57 | 0.5455 | 0.75 |
| 0.0293 | 20.0 | 60 | 0.5525 | 0.75 |
| 0.0254 | 21.0 | 63 | 0.5560 | 0.75 |
| 0.0231 | 22.0 | 66 | 0.5569 | 0.75 |
| 0.0201 | 23.0 | 69 | 0.5572 | 0.75 |
| 0.0179 | 24.0 | 72 | 0.5575 | 0.75 |
| 0.0184 | 25.0 | 75 | 0.5547 | 0.75 |
| 0.0148 | 26.0 | 78 | 0.5493 | 0.75 |
| 0.0149 | 27.0 | 81 | 0.5473 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 3,034 |
SharanSMenon/22-languages-bert-base-cased | [
"Arabic",
"Chinese",
"Latin",
"Persian",
"Portugese",
"Pushto",
"Romanian",
"Russian",
"Spanish",
"Swedish",
"Tamil",
"Thai",
"Dutch",
"Turkish",
"Urdu",
"English",
"Estonian",
"French",
"Hindi",
"Indonesian",
"Japanese",
"Korean"
] | ---
metrics:
- accuracy
widget:
- text: "In war resolution, in defeat defiance, in victory magnanimity"
- text: "en la guerra resolución en la derrota desafío en la victoria magnanimidad"
---
[](https://colab.research.google.com/drive/1dqeUwS_DZ-urrmYzB29nTCBUltwJxhbh?usp=sharing)
# 22 Language Identifier - BERT
This model is trained to identify the following 22 languages.
- Arabic
- Chinese
- Dutch
- English
- Estonian
- French
- Hindi
- Indonesian
- Japanese
- Korean
- Latin
- Persian
- Portugese
- Pushto
- Romanian
- Russian
- Spanish
- Swedish
- Tamil
- Thai
- Turkish
- Urdu
## Loading the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("SharanSMenon/22-languages-bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("SharanSMenon/22-languages-bert-base-cased")
```
## Inference
```python
def predict(sentence):
    # Tokenize the input and run a forward pass through the classifier.
    tokenized = tokenizer(sentence, return_tensors="pt")
    outputs = model(**tokenized)
    # Map the highest-scoring logit to its language label.
    return model.config.id2label[outputs.logits.argmax(dim=1).item()]
```
### Examples
```python
sentence1 = "in war resolution, in defeat defiance, in victory magnanimity"
predict(sentence1) # English
sentence2 = "en la guerra resolución en la derrota desafío en la victoria magnanimidad"
predict(sentence2) # Spanish
sentence3 = "هذا هو أعظم إله على الإطلاق"
predict(sentence3) # Arabic
``` | 1,526 |
Tejas3/distillbert_110_uncased_movie_genre | [
"action",
"drama",
"horror",
"sci_fi",
"superhero",
"thriller"
] | Entry not found | 15 |
TransQuest/monotransquest-hter-en_lv-it-nmt | [
"LABEL_0"
] | ---
language: en-lv
tags:
- Quality Estimation
- monotransquest
- hter
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as-is in a given context, or whether it requires human post-editing before publishing, or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi in all the languages tested.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_lv-it-nmt", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs, covering both sentence-level and word-level quality estimation.
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest.
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 5,407 |
aXhyra/demo_irony_31415 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_irony_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.685764300192161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_irony_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2905
- F1: 0.6858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.7735294032820418e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 358 | 0.5872 | 0.6786 |
| 0.5869 | 2.0 | 716 | 0.6884 | 0.6952 |
| 0.3417 | 3.0 | 1074 | 0.9824 | 0.6995 |
| 0.3417 | 4.0 | 1432 | 1.2905 | 0.6858 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
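The F1 reported above is the harmonic mean of precision and recall. A minimal sketch of the binary formula (illustrative only; the card's value comes from the Trainer's evaluation library, not this snippet):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Binary F1 from confusion-matrix counts.

    Illustrative only; the F1 reported in the card is computed by the
    evaluation library used during training, not by this sketch.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score(8, 2, 2))  # precision = recall = 0.8, so F1 = 0.8
```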
| 1,757 |
aXhyra/irony_trained | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: irony_trained
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.6851011633121422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irony_trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6471
- F1: 0.6851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6589 | 1.0 | 716 | 0.6187 | 0.6646 |
| 0.5494 | 2.0 | 1432 | 0.9314 | 0.6793 |
| 0.3369 | 3.0 | 2148 | 1.3468 | 0.6833 |
| 0.2129 | 4.0 | 2864 | 1.6471 | 0.6851 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,752 |
aXhyra/presentation_emotion_1234567 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_emotion_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7272977042723248
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_emotion_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0237
- F1: 0.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1189 | 1.0 | 408 | 0.6827 | 0.7164 |
| 1.0678 | 2.0 | 816 | 0.6916 | 0.7396 |
| 0.6582 | 3.0 | 1224 | 0.9281 | 0.7276 |
| 0.0024 | 4.0 | 1632 | 1.0237 | 0.7273 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,788 |
aXhyra/presentation_emotion_31415 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_emotion_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7148501877297316
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_emotion_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1243
- F1: 0.7149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.73 | 1.0 | 408 | 0.8206 | 0.6491 |
| 0.3868 | 2.0 | 816 | 0.7733 | 0.7230 |
| 0.0639 | 3.0 | 1224 | 0.9962 | 0.7101 |
| 0.0507 | 4.0 | 1632 | 1.1243 | 0.7149 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,782 |
aXhyra/presentation_sentiment_42 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_sentiment_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7175864613336908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_sentiment_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6491
- F1: 0.7176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.923967812567773e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4391 | 1.0 | 2851 | 0.6591 | 0.6953 |
| 0.6288 | 2.0 | 5702 | 0.6265 | 0.7158 |
| 0.4071 | 3.0 | 8553 | 0.6401 | 0.7179 |
| 0.6532 | 4.0 | 11404 | 0.6491 | 0.7176 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
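The F1 reported here is presumably averaged over the three tweet_eval sentiment classes (negative, neutral, positive). A self-contained sketch of macro-F1 from per-class confusion counts; the counts below are hypothetical, not taken from this run:

```python
def f1_score(tp, fp, fn):
    """Per-class F1 from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(per_class_counts):
    """Unweighted mean of per-class F1 scores."""
    scores = [f1_score(tp, fp, fn) for tp, fp, fn in per_class_counts]
    return sum(scores) / len(scores)

# hypothetical (tp, fp, fn) for negative, neutral, positive
print(macro_f1([(80, 20, 10), (60, 30, 25), (90, 10, 15)]))
```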
| 1,788 |
adamlin/filter | [
"LABEL_0"
] | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: filter
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# filter
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the GLUE STSB dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.9.0
- Tokenizers 0.10.3
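With gradient_accumulation_steps of 2, gradients from two micro-batches of 6 are combined before each optimizer step, which is where the total train batch size of 12 comes from. A minimal sketch of the accumulation pattern on scalar gradients (illustrative only, not this repo's training loop):

```python
def accumulated_steps(grads, accumulation_steps):
    """Average micro-batch gradients, taking one optimizer step per window."""
    steps = []
    buffer = 0.0
    for i, g in enumerate(grads, start=1):
        buffer += g / accumulation_steps  # scale so the sum equals the mean
        if i % accumulation_steps == 0:
            steps.append(buffer)          # one optimizer step per full window
            buffer = 0.0
    return steps

# two micro-batch gradients -> one optimizer step with their mean
print(accumulated_steps([1.0, 3.0], accumulation_steps=2))  # → [2.0]
```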
| 1,240 |
adamlin/ml999_matal_bed | [
"0",
"1"
] | Entry not found | 15 |
adamlin/ml999_metal_num | [
"0",
"1"
] | Entry not found | 15 |
adamlin/zero-shot-domain_cls | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
aditeyabaral/finetuned-iitp_pdt_review-roberta-hinglish-big | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
aditeyabaral/finetuned-iitp_pdt_review-roberta-hinglish-small | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
aditeyabaral/finetuned-iitp_pdt_review-xlm-roberta-base | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
ajrae/bert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5864941797290588
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8385
- Matthews Correlation: 0.5865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4887 | 1.0 | 535 | 0.5016 | 0.5107 |
| 0.286 | 2.0 | 1070 | 0.5473 | 0.5399 |
| 0.1864 | 3.0 | 1605 | 0.7114 | 0.5706 |
| 0.1163 | 4.0 | 2140 | 0.8385 | 0.5865 |
| 0.0834 | 5.0 | 2675 | 0.9610 | 0.5786 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
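Matthews correlation, the standard CoLA metric reported above, condenses all four cells of the binary confusion matrix into a single value between -1 and 1. A self-contained sketch of the formula; the counts are illustrative, not this model's predictions:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from a binary confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef(tp=50, tn=40, fp=5, fn=5))
```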
| 1,976 |
aloxatel/3RH | null | Entry not found | 15 |
aloxatel/9WT | null | Entry not found | 15 |
appleternity/bert-base-uncased-finetuned-coda19 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
aristotletan/roberta-base-finetuned-sst2 | [
"analogous event",
"appointment of receiver",
"assets",
"breach of obligations",
"cessation of business",
"composition and arrangement",
"creditor control",
"cross default",
"disposal",
"event or events",
"insolvency",
"invalidity",
"jeopardy",
"judgement",
"legal proceedings",
"misrep... | ---
license: mit
tags:
- generated_from_trainer
datasets:
- scim
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: scim
type: scim
args: eod
metrics:
- name: Accuracy
type: accuracy
value: 0.9111111111111111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sst2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the scim dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4632
- Accuracy: 0.9111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 90 | 2.0273 | 0.6667 |
| No log | 2.0 | 180 | 0.8802 | 0.8556 |
| No log | 3.0 | 270 | 0.5908 | 0.8889 |
| No log | 4.0 | 360 | 0.4632 | 0.9111 |
| No log | 5.0 | 450 | 0.4294 | 0.9111 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
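The "No log" entries in the training-loss column most likely appear because the Trainer's default logging interval (500 steps) exceeds the 90 steps this run takes per epoch, so no loss was ever recorded. Those step counts also pin down the approximate training-set size, assuming no gradient accumulation and a final partial batch counted as a step:

```python
import math

def steps_per_epoch(num_examples, batch_size):
    """Optimizer steps per epoch with no gradient accumulation."""
    return math.ceil(num_examples / batch_size)

# 90 steps/epoch at batch size 4 implies roughly 360 training examples
print(steps_per_epoch(360, 4))  # → 90
```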
| 1,811 |
aristotletan/scim-distilroberta | [
"Conditions Precedent",
"Conditions Subsequent",
"Conflict of Interest",
"Designated Accounts",
"Events of Default",
"Financial Covenant",
"Information Covenant",
"Negative Covenant",
"Positive Covenant",
"Rating",
"Utilisation of Proceeds"
] | Entry not found | 15 |