| modelId | label | readme | readme_len |
|---|---|---|---|
Jeevesh8/lecun_feather_berts-70 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-61 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-59 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-60 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-25 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-23 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-32 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-94 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-74 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-87 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
bondi/bert-semaphore-prediction-w0 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
tags:
- generated_from_trainer
model-index:
- name: bert-semaphore-prediction-w0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-semaphore-prediction-w0
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
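These hyperparameters (shared by the three `bert-semaphore-prediction` checkpoints in this dump) map directly onto the Hugging Face `TrainingArguments` API; a minimal sketch under that assumption, with `output_dir` as a placeholder name:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="bert-semaphore-prediction-w0",
    learning_rate=2e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```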
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 935 |
bondi/bert-semaphore-prediction-w4 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
tags:
- generated_from_trainer
model-index:
- name: bert-semaphore-prediction-w4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-semaphore-prediction-w4
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 935 |
bondi/bert-semaphore-prediction-w8 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
tags:
- generated_from_trainer
model-index:
- name: bert-semaphore-prediction-w8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-semaphore-prediction-w8
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 935 |
sayakpramanik/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9228534433920637
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2166
- Accuracy: 0.923
- F1: 0.9229
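A hypothetical inference sketch for this checkpoint; note the exported labels are the generic `LABEL_0`…`LABEL_5` shown in the label column, so mapping them back to emotion names is left to the caller:
```python
from transformers import pipeline

# Loads the fine-tuned checkpoint; the printed score is illustrative, not from the card.
classifier = pipeline(
    "text-classification",
    model="sayakpramanik/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how happy this made me!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```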
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8472 | 1.0 | 250 | 0.3169 | 0.912 | 0.9105 |
| 0.2475 | 2.0 | 500 | 0.2166 | 0.923 | 0.9229 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,804 |
bondi/bert-clean-semaphore-prediction-w2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-clean-semaphore-prediction-w2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-clean-semaphore-prediction-w2
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0685
- Accuracy: 0.9716
- F1: 0.9715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,204 |
DanielSM/1444Test | null | Entry not found | 15 |
clhuang/albert-news-classification | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
language:
- tw
tags:
- albert
- classification
license: afl-3.0
metrics:
- Accuracy
---
# Traditional Chinese news classification
Traditional Chinese news classification task, fine-tuned from the ckiplab/albert-base-chinese pretrained model on a dataset of only about 26,000 examples, built as an example model for a course.
```python
from transformers import BertTokenizer, AlbertForSequenceClassification

model_path = "clhuang/albert-news-classification"
model = AlbertForSequenceClassification.from_pretrained(model_path)
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

# Category index
news_categories = ['政治','科技','運動','證卷','產經','娛樂','生活','國際','社會','文化','兩岸']
idx2cate = {i: item for i, item in enumerate(news_categories)}

# Get the predicted category and its probability
def get_category_proba(text):
    max_length = 250
    # prepare token sequence
    inputs = tokenizer([text], padding=True, truncation=True,
                       max_length=max_length, return_tensors="pt")
    # perform inference
    outputs = model(**inputs)
    # get output probabilities by doing softmax
    probs = outputs[0].softmax(1)
    # argmax gives the candidate label index
    label_index = probs.argmax(dim=1)[0].tolist()  # convert tensor to int
    # look up the label name and its probability
    label = idx2cate[label_index]
    proba = round(float(probs.tolist()[0][label_index]), 2)
    response = {'label': label, 'proba': proba}
    return response

get_category_proba('俄羅斯2月24日入侵烏克蘭至今不到3個月,芬蘭已準備好扭轉奉行了75年的軍事不結盟政策,申請加入北約。芬蘭總理馬林昨天表示,「希望我們下星期能與瑞典一起提出申請」。')
# {'label': '國際', 'proba': 0.99}
```
| 1,605 |
HrayrMSint/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9135483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7771
- Accuracy: 0.9135
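The label list for this model is truncated in the table above; a small sketch, assuming the full intent mapping was saved with the checkpoint config, to recover it:
```python
from transformers import AutoConfig

# clinc_oos "plus" has 150 in-scope intents plus out-of-scope, so id2label
# should hold 151 entries if the full mapping was exported.
config = AutoConfig.from_pretrained("HrayrMSint/distilbert-base-uncased-finetuned-clinc")
print(len(config.id2label))
for i in range(5):
    print(i, config.id2label[i])
```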
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2843 | 1.0 | 318 | 3.2793 | 0.7448 |
| 2.6208 | 2.0 | 636 | 1.8750 | 0.8297 |
| 1.5453 | 3.0 | 954 | 1.1565 | 0.8919 |
| 1.0141 | 4.0 | 1272 | 0.8628 | 0.9090 |
| 0.795 | 5.0 | 1590 | 0.7771 | 0.9135 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0
- Datasets 2.2.2
- Tokenizers 0.10.3
| 1,883 |
Jeevesh8/std_pnt_04_feather_berts-68 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-30 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-79 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-64 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-78 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-60 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-44 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-91 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-65 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-62 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-29 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-81 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-18 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-83 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-61 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-82 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-34 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-80 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-45 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-67 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-33 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-35 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-41 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-42 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-53 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-43 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-23 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-54 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-28 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-76 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-26 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-25 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-52 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-51 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-71 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-0 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-59 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-3 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-12 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-87 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-31 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-6 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-88 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-8 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-4 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-7 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-5 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-96 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-99 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-94 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-93 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-97 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-95 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
tauseefr84/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.838
- name: F1
type: f1
value: 0.822753081351476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5268
- Accuracy: 0.838
- F1: 0.8228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9225 | 1.0 | 250 | 0.5268 | 0.838 | 0.8228 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,732 |
course5i/SEAD-L-6_H-384_A-12-qqp | [
"0",
"1"
] | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- qqp
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-384_A-12-qqp
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **qqp** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased)
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
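The card itself leaves usage unspecified; below is a minimal, hypothetical inference sketch for the QQP task, assuming the GLUE convention that label `1` marks a duplicate question pair (the example questions are illustrative):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "course5i/SEAD-L-6_H-384_A-12-qqp"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Encode the two questions as a sentence pair, as in GLUE QQP.
inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs[0, 1].item())  # assumed probability that the pair is a duplicate
```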
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import os
import torch

# Load the serialized TrainingArguments shipped with the checkpoint.
hyperparameters = torch.load(os.path.join('training_args.bin'))
```
### Evaluation results
| eval_accuracy | eval_f1 | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:-------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9126 | 0.8822 | 23.0122 | 1756.896 | 54.927 | 0.3389 | 40430 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
| 3,700 |
anvayS/reddit-aita-classifier | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: reddit-aita-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-aita-classifier
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- Accuracy: 0.9497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5866 | 1.0 | 1250 | 0.5692 | 0.7247 |
| 0.5638 | 2.0 | 2500 | 0.4841 | 0.7813 |
| 0.4652 | 3.0 | 3750 | 0.2712 | 0.9077 |
| 0.3088 | 4.0 | 5000 | 0.1667 | 0.9497 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,582 |
orkg/orkgnlp-tdm-extraction | null | ---
license: mit
---
This repository includes the files required to run the `TDM Extraction` ORKG-NLP service.
Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service. | 247 |
Alireza1044/mobilebert_mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8230268510984541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4595
- Accuracy: 0.8230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 48
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.3
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,394 |
olivia371/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9253731343283581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2348
- Accuracy: 0.925
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,507 |
Alireza1044/mobilebert_qqp | [
"duplicate",
"not_duplicate"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8988869651249073
- name: F1
type: f1
value: 0.8670050100852366
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qqp
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2458
- Accuracy: 0.8989
- F1: 0.8670
- Combined Score: 0.8829
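(The combined score is presumably the mean of accuracy and F1: (0.8989 + 0.8670) / 2 ≈ 0.8829.)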
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.5
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,494 |
Alireza1044/mobilebert_QNLI | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9068277503203368
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3731
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,395 |
Alireza1044/mobilebert_rte | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6678700361010831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rte
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8396
- Accuracy: 0.6679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,390 |
ali2066/sentence_bert-base-uncased-finetuned-SENTENCE | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: sentence_bert-base-uncased-finetuned-SENTENCE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence_bert-base-uncased-finetuned-SENTENCE
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4834
- Precision: 0.8079
- Recall: 1.0
- F1: 0.8938
- Accuracy: 0.8079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 13 | 0.3520 | 0.8889 | 1.0 | 0.9412 | 0.8889 |
| No log | 2.0 | 26 | 0.3761 | 0.8889 | 1.0 | 0.9412 | 0.8889 |
| No log | 3.0 | 39 | 0.3683 | 0.8889 | 1.0 | 0.9412 | 0.8889 |
| No log | 4.0 | 52 | 0.3767 | 0.8889 | 1.0 | 0.9412 | 0.8889 |
| No log | 5.0 | 65 | 0.3834 | 0.8889 | 1.0 | 0.9412 | 0.8889 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,915 |
johntang/finetuning-sentiment-model-3000-samples | [
"neg",
"pos"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8786885245901639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3426
- Accuracy: 0.8767
- F1: 0.8787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,515 |
S2312dal/M1_MLM_cross | [
"LABEL_0"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: M1_MLM_cross
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M1_MLM_cross
This model is a fine-tuned version of [S2312dal/M1_MLM](https://huggingface.co/S2312dal/M1_MLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0106
- Pearson: 0.9723
- Spearmanr: 0.9112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8.0
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0094 | 1.0 | 131 | 0.0342 | 0.9209 | 0.8739 |
| 0.0091 | 2.0 | 262 | 0.0157 | 0.9585 | 0.9040 |
| 0.0018 | 3.0 | 393 | 0.0106 | 0.9723 | 0.9112 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,585 |
Alireza1044/MobileBERT_Theseus-sts-b | [
"LABEL_0"
] | Entry not found | 15 |
Alireza1044/MobileBERT_Theseus-sst-2 | [
"negative",
"positive"
] | Entry not found | 15 |
scjones/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9315
- name: F1
type: f1
value: 0.9317528216385311
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Accuracy: 0.9315
- F1: 0.9318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2115 | 1.0 | 250 | 0.1696 | 0.93 | 0.9295 |
| 0.1376 | 2.0 | 500 | 0.1630 | 0.9315 | 0.9318 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,807 |
fouad-shammary/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9165
- name: F1
type: f1
value: 0.9164107076814402
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2349
- Accuracy: 0.9165
- F1: 0.9164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.837 | 1.0 | 250 | 0.3317 | 0.9015 | 0.8999 |
| 0.2563 | 2.0 | 500 | 0.2349 | 0.9165 | 0.9164 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,806 |
furyhawk/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.915483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7788
- Accuracy: 0.9155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2841 | 1.0 | 318 | 3.2794 | 0.7465 |
| 2.623 | 2.0 | 636 | 1.8719 | 0.8335 |
| 1.5474 | 3.0 | 954 | 1.1629 | 0.8929 |
| 1.014 | 4.0 | 1272 | 0.8621 | 0.9094 |
| 0.7987 | 5.0 | 1590 | 0.7788 | 0.9155 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,883 |
Mascariddu8/test-masca | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: test-masca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-masca
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,032 |
QuentinKemperino/ECHR_test_2_task_B | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- lex_glue
model-index:
- name: ECHR_test_2_task_B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ECHR_test_2_task_B
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2092
- Macro-f1: 0.5250
- Micro-f1: 0.6190
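ECHR Task B in lex_glue is a multi-label problem, which is why both macro-F1 (the unweighted mean of per-label F1) and micro-F1 (F1 over all pooled label decisions) are reported. A hypothetical inference sketch, assuming an independent sigmoid per label and a 0.5 decision threshold (neither is stated on the card):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "QuentinKemperino/ECHR_test_2_task_B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

text = "The applicant complained about the length of the civil proceedings."
inputs = tokenizer(text, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
# Keep every label whose assumed per-label probability clears the threshold.
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```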
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2119 | 0.44 | 500 | 0.2945 | 0.2637 | 0.4453 |
| 0.1702 | 0.89 | 1000 | 0.2734 | 0.3246 | 0.4843 |
| 0.1736 | 1.33 | 1500 | 0.2633 | 0.3725 | 0.5133 |
| 0.1571 | 1.78 | 2000 | 0.2549 | 0.3942 | 0.5417 |
| 0.1476 | 2.22 | 2500 | 0.2348 | 0.4187 | 0.5649 |
| 0.1599 | 2.67 | 3000 | 0.2427 | 0.4286 | 0.5606 |
| 0.1481 | 3.11 | 3500 | 0.2210 | 0.4664 | 0.5780 |
| 0.1412 | 3.56 | 4000 | 0.2542 | 0.4362 | 0.5617 |
| 0.1505 | 4.0 | 4500 | 0.2249 | 0.4728 | 0.5863 |
| 0.1425 | 4.44 | 5000 | 0.2311 | 0.4576 | 0.5845 |
| 0.1461 | 4.89 | 5500 | 0.2261 | 0.4590 | 0.5832 |
| 0.1451 | 5.33 | 6000 | 0.2248 | 0.4738 | 0.5901 |
| 0.1281 | 5.78 | 6500 | 0.2317 | 0.4641 | 0.5896 |
| 0.1354 | 6.22 | 7000 | 0.2366 | 0.4639 | 0.5946 |
| 0.1204 | 6.67 | 7500 | 0.2311 | 0.4875 | 0.5877 |
| 0.1229 | 7.11 | 8000 | 0.2083 | 0.4815 | 0.6020 |
| 0.1368 | 7.56 | 8500 | 0.2170 | 0.5213 | 0.6021 |
| 0.1288 | 8.0 | 9000 | 0.2136 | 0.5336 | 0.6176 |
| 0.1275 | 8.44 | 9500 | 0.2180 | 0.5204 | 0.6082 |
| 0.1232 | 8.89 | 10000 | 0.2147 | 0.5334 | 0.6083 |
| 0.1319 | 9.33 | 10500 | 0.2121 | 0.5312 | 0.6186 |
| 0.1267 | 9.78 | 11000 | 0.2092 | 0.5250 | 0.6190 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 3,010 |
Elron/deberta-v3-large-hate | [
"0",
"1"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large
results: []
---
# deberta-v3-large-hate
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Model description
Test set results:
| Model | Emotion | Hate | Irony | Offensive | Sentiment |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** |
| BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 |
| RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 |
[source:papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval)
## Intended uses & limitations
Classifying attributes of interest on Twitter-like data.
## Training and evaluation data
[tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Training procedure
Fine-tuned and evaluated with [run_glue.py]()
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6362 | 0.18 | 100 | 0.5481 | 0.7197 |
| 0.4264 | 0.36 | 200 | 0.4550 | 0.8008 |
| 0.4174 | 0.53 | 300 | 0.4524 | 0.7868 |
| 0.4197 | 0.71 | 400 | 0.4586 | 0.7918 |
| 0.3819 | 0.89 | 500 | 0.4368 | 0.8078 |
| 0.3558 | 1.07 | 600 | 0.4525 | 0.8068 |
| 0.2982 | 1.24 | 700 | 0.4999 | 0.7928 |
| 0.2885 | 1.42 | 800 | 0.5129 | 0.8108 |
| 0.253 | 1.6 | 900 | 0.5873 | 0.8208 |
| 0.3354 | 1.78 | 1000 | 0.4244 | 0.8178 |
| 0.3083 | 1.95 | 1100 | 0.4853 | 0.8058 |
| 0.2301 | 2.13 | 1200 | 0.7209 | 0.8018 |
| 0.2167 | 2.31 | 1300 | 0.8090 | 0.7778 |
| 0.1863 | 2.49 | 1400 | 0.6812 | 0.8038 |
| 0.2181 | 2.66 | 1500 | 0.6958 | 0.8138 |
| 0.2159 | 2.84 | 1600 | 0.6315 | 0.8118 |
| 0.1828 | 3.02 | 1700 | 0.7173 | 0.8138 |
| 0.1287 | 3.2 | 1800 | 0.9081 | 0.8018 |
| 0.1711 | 3.37 | 1900 | 0.8858 | 0.8068 |
| 0.1598 | 3.55 | 2000 | 0.7878 | 0.8028 |
| 0.1467 | 3.73 | 2100 | 0.9003 | 0.7948 |
| 0.127 | 3.91 | 2200 | 0.9066 | 0.8048 |
| 0.1134 | 4.09 | 2300 | 0.9646 | 0.8118 |
| 0.1017 | 4.26 | 2400 | 0.9778 | 0.8048 |
| 0.085 | 4.44 | 2500 | 1.0529 | 0.8088 |
| 0.0996 | 4.62 | 2600 | 1.0082 | 0.8058 |
| 0.1054 | 4.8 | 2700 | 0.9698 | 0.8108 |
| 0.1375 | 4.97 | 2800 | 0.9334 | 0.8048 |
| 0.0487 | 5.15 | 2900 | 1.1273 | 0.8108 |
| 0.0611 | 5.33 | 3000 | 1.1528 | 0.8058 |
| 0.0668 | 5.51 | 3100 | 1.0148 | 0.8118 |
| 0.0582 | 5.68 | 3200 | 1.1333 | 0.8108 |
| 0.0869 | 5.86 | 3300 | 1.0607 | 0.8088 |
| 0.0623 | 6.04 | 3400 | 1.1880 | 0.8068 |
| 0.0317 | 6.22 | 3500 | 1.2836 | 0.8008 |
| 0.0546 | 6.39 | 3600 | 1.2148 | 0.8058 |
| 0.0486 | 6.57 | 3700 | 1.3348 | 0.8008 |
| 0.0332 | 6.75 | 3800 | 1.3734 | 0.8018 |
| 0.051 | 6.93 | 3900 | 1.2966 | 0.7978 |
| 0.0217 | 7.1 | 4000 | 1.3853 | 0.8048 |
| 0.0109 | 7.28 | 4100 | 1.4803 | 0.8068 |
| 0.0345 | 7.46 | 4200 | 1.4906 | 0.7998 |
| 0.0365 | 7.64 | 4300 | 1.4347 | 0.8028 |
| 0.0265 | 7.82 | 4400 | 1.3977 | 0.8128 |
| 0.0257 | 7.99 | 4500 | 1.3705 | 0.8108 |
| 0.0036 | 8.17 | 4600 | 1.4353 | 0.8168 |
| 0.0269 | 8.35 | 4700 | 1.4826 | 0.8068 |
| 0.0231 | 8.53 | 4800 | 1.4811 | 0.8118 |
| 0.0204 | 8.7 | 4900 | 1.5245 | 0.8028 |
| 0.0263 | 8.88 | 5000 | 1.5123 | 0.8018 |
| 0.0138 | 9.06 | 5100 | 1.5113 | 0.8028 |
| 0.0089 | 9.24 | 5200 | 1.5846 | 0.7978 |
| 0.029 | 9.41 | 5300 | 1.5362 | 0.8008 |
| 0.0058 | 9.59 | 5400 | 1.5759 | 0.8018 |
| 0.0084 | 9.77 | 5500 | 1.5679 | 0.8018 |
| 0.0065 | 9.95 | 5600 | 1.5683 | 0.8028 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
| 5,332 |
Elron/deberta-v3-large-irony | [
"0",
"1"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large
results: []
---
# deberta-v3-large-irony
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Model description
Test set results:
| Model | Emotion | Hate | Irony | Offensive | Sentiment |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** |
| BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 |
| RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 |
[source:papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval)
## Intended uses & limitations
Classifying attributes of interest on Twitter-like data.
## Training and evaluation data
[tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Training procedure
Fine-tuned and evaluated with [run_glue.py]()
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6478 | 1.12 | 100 | 0.5890 | 0.7529 |
| 0.5013 | 2.25 | 200 | 0.5873 | 0.7707 |
| 0.388 | 3.37 | 300 | 0.6993 | 0.7602 |
| 0.3169 | 4.49 | 400 | 0.6773 | 0.7874 |
| 0.2693 | 5.61 | 500 | 0.7172 | 0.7707 |
| 0.2396 | 6.74 | 600 | 0.7397 | 0.7801 |
| 0.2284 | 7.86 | 700 | 0.8096 | 0.7550 |
| 0.2207 | 8.98 | 800 | 0.7827 | 0.7654 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
| 2,445 |
Elron/deberta-v3-large-offensive | [
"0",
"1"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large
results: []
---
# deberta-v3-large-offensive
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Model description
Test set results:
| Model | Emotion | Hate | Irony | Offensive | Sentiment |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** |
| BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 |
| RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 |
[source:papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval)
## Intended uses & limitations
Classifying attributes of interest on Twitter-like data.
## Training and evaluation data
[tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset.
## Training procedure
Fine-tuned and evaluated with [run_glue.py]()
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6417 | 0.27 | 100 | 0.6283 | 0.6533 |
| 0.5105 | 0.54 | 200 | 0.4588 | 0.7915 |
| 0.4554 | 0.81 | 300 | 0.4500 | 0.7968 |
| 0.4212 | 1.08 | 400 | 0.4773 | 0.7938 |
| 0.4054 | 1.34 | 500 | 0.4311 | 0.7983 |
| 0.3922 | 1.61 | 600 | 0.4588 | 0.7998 |
| 0.3776 | 1.88 | 700 | 0.4367 | 0.8066 |
| 0.3535 | 2.15 | 800 | 0.4675 | 0.8074 |
| 0.33 | 2.42 | 900 | 0.4874 | 0.8021 |
| 0.3113 | 2.69 | 1000 | 0.4949 | 0.8044 |
| 0.3203 | 2.96 | 1100 | 0.4550 | 0.8059 |
| 0.248 | 3.23 | 1200 | 0.4858 | 0.8036 |
| 0.2478 | 3.49 | 1300 | 0.5299 | 0.8029 |
| 0.2371 | 3.76 | 1400 | 0.5013 | 0.7991 |
| 0.2388 | 4.03 | 1500 | 0.5520 | 0.8021 |
| 0.1744 | 4.3 | 1600 | 0.6687 | 0.7915 |
| 0.1788 | 4.57 | 1700 | 0.7560 | 0.7689 |
| 0.1652 | 4.84 | 1800 | 0.6985 | 0.7832 |
| 0.1596 | 5.11 | 1900 | 0.7191 | 0.7915 |
| 0.1214 | 5.38 | 2000 | 0.9097 | 0.7893 |
| 0.1432 | 5.64 | 2100 | 0.9184 | 0.7787 |
| 0.1145 | 5.91 | 2200 | 0.9620 | 0.7878 |
| 0.1069 | 6.18 | 2300 | 0.9489 | 0.7893 |
| 0.1012 | 6.45 | 2400 | 1.0107 | 0.7817 |
| 0.0942 | 6.72 | 2500 | 1.0021 | 0.7885 |
| 0.087 | 6.99 | 2600 | 1.1090 | 0.7915 |
| 0.0598 | 7.26 | 2700 | 1.1735 | 0.7795 |
| 0.0742 | 7.53 | 2800 | 1.1433 | 0.7817 |
| 0.073 | 7.79 | 2900 | 1.1343 | 0.7953 |
| 0.0553 | 8.06 | 3000 | 1.2258 | 0.7840 |
| 0.0474 | 8.33 | 3100 | 1.2461 | 0.7817 |
| 0.0515 | 8.6 | 3200 | 1.2996 | 0.7825 |
| 0.0551 | 8.87 | 3300 | 1.2819 | 0.7855 |
| 0.0541 | 9.14 | 3400 | 1.2808 | 0.7855 |
| 0.0465 | 9.41 | 3500 | 1.3398 | 0.7817 |
| 0.0407 | 9.68 | 3600 | 1.3231 | 0.7825 |
| 0.0343 | 9.94 | 3700 | 1.3330 | 0.7825 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
| 4,216 |
cjbarrie/autotrain-atc2 | [
"0",
"1"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- cjbarrie/autotrain-data-traintest-sentiment-split
co2_eq_emissions: 3.1566482249518177
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1024534825
- CO2 Emissions (in grams): 3.1566482249518177
## Validation Metrics
- Loss: 0.5167999267578125
- Accuracy: 0.7523809523809524
- Precision: 0.7377049180327869
- Recall: 0.5555555555555556
- AUC: 0.8142525600535937
- F1: 0.6338028169014086
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/cjbarrie/autotrain-traintest-sentiment-split-1024534825
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("cjbarrie/autotrain-traintest-sentiment-split-1024534825", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("cjbarrie/autotrain-traintest-sentiment-split-1024534825", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,245 |
domenicrosati/BioM-ALBERT-xxlarge-finetuned-DAGPap22 | null | ---
tags:
- text-classification
- generated_from_trainer
model-index:
- name: BioM-ALBERT-xxlarge-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioM-ALBERT-xxlarge-finetuned-DAGPap22
This model is a fine-tuned version of [sultan/BioM-ALBERT-xxlarge](https://huggingface.co/sultan/BioM-ALBERT-xxlarge) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,124 |
deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513 | [
"negative",
"positive"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-bert_wikipedia_sst_2
co2_eq_emissions: 16.686945384446037
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1034235513
- CO2 Emissions (in grams): 16.686945384446037
## Validation Metrics
- Loss: 0.14450643956661224
- Accuracy: 0.9527839643652561
- Precision: 0.9565852363250132
- Recall: 0.9588767633750332
- AUC: 0.9872179498202862
- F1: 0.9577296291373122
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,231 |
Parsa/Drug_Induced_Liver_Injury_classification | null | Entry not found | 15 |
deepesh0x/autotrain-glue1-1046836019 | [
"False",
"True"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-glue1
co2_eq_emissions: 3.869994913020229
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1046836019
- CO2 Emissions (in grams): 3.869994913020229
## Validation Metrics
- Loss: 0.626447856426239
- Accuracy: 0.6606574761399788
- Precision: 0.6925845932325414
- Recall: 0.8187234042553192
- AUC: 0.656404823892031
- F1: 0.750390015600624
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-glue1-1046836019
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-glue1-1046836019", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-glue1-1046836019", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,165 |