modelId (string, length 6–107) | label list | readme (string, length 0–56.2k) | readme_len (int64, 0–56.2k) |
|---|---|---|---|
crcb/imp_hatred | [
"0",
"1",
"2"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-imp_hs
co2_eq_emissions: 15.91710539314839
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 753423062
- CO2 Emissions (in grams): 15.91710539314839
## Validation Metrics
- Loss: 0.5205655694007874
- Accuracy: 0.7746741154562383
- Macro F1: 0.5796696218586866
- Micro F1: 0.7746741154562382
- Weighted F1: 0.7602379277947592
- Macro Precision: 0.6976905233970596
- Micro Precision: 0.7746741154562383
- Weighted Precision: 0.7628815999440115
- Macro Recall: 0.557144871405371
- Micro Recall: 0.7746741154562383
- Weighted Recall: 0.7746741154562383
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-imp_hs-753423062
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-imp_hs-753423062", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-imp_hs-753423062", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
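# Illustrative addition (not part of the original card): turn the raw logits
# into a predicted label, assuming the standard sequence-classification output.
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])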
``` | 1,361 |
rabiaqayyum/autotrain-mental-health-analysis-752423172 | [
"Anxiety",
"BPD",
"autism",
"bipolar",
"depression",
"mentalhealth",
"schizophrenia"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- rabiaqayyum/autotrain-data-mental-health-analysis
co2_eq_emissions: 313.3534743349287
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 752423172
- CO2 Emissions (in grams): 313.3534743349287
## Validation Metrics
- Loss: 0.6064515113830566
- Accuracy: 0.805171240644137
- Macro F1: 0.7253473044054398
- Micro F1: 0.805171240644137
- Weighted F1: 0.7970679970423672
- Macro Precision: 0.7477679873153633
- Micro Precision: 0.805171240644137
- Weighted Precision: 0.7966263131173029
- Macro Recall: 0.7143231260991618
- Micro Recall: 0.805171240644137
- Weighted Recall: 0.805171240644137
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/rabiaqayyum/autotrain-mental-health-analysis-752423172
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rabiaqayyum/autotrain-mental-health-analysis-752423172", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rabiaqayyum/autotrain-mental-health-analysis-752423172", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,449 |
afbudiman/distilled-optimized-indobert-classification | [
"negative",
"neutral",
"positive"
] | ---
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- f1
model-index:
- name: distilled-optimized-indobert-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- name: F1
type: f1
value: 0.8994069293432798
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-optimized-indobert-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7397
- Accuracy: 0.9
- F1: 0.8994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.315104717136378e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
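For reference, a minimal sketch of how the hyperparameters above would typically be expressed with the `transformers` Trainer API; the argument names are the standard `TrainingArguments` fields and are an assumption, since the original training script is not part of this card.
```python
from transformers import TrainingArguments

# Sketch only: mirrors the listed hyperparameters, not the original training script.
training_args = TrainingArguments(
    output_dir="distilled-optimized-indobert-classification",
    learning_rate=4.315104717136378e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=33,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=9,
)
```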
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.128 | 1.0 | 688 | 0.8535 | 0.8913 | 0.8917 |
| 0.1475 | 2.0 | 1376 | 0.9171 | 0.8913 | 0.8913 |
| 0.0997 | 3.0 | 2064 | 0.7799 | 0.8960 | 0.8951 |
| 0.0791 | 4.0 | 2752 | 0.7179 | 0.9032 | 0.9023 |
| 0.0577 | 5.0 | 3440 | 0.6908 | 0.9063 | 0.9055 |
| 0.0406 | 6.0 | 4128 | 0.7613 | 0.8992 | 0.8986 |
| 0.0275 | 7.0 | 4816 | 0.7502 | 0.8992 | 0.8989 |
| 0.023 | 8.0 | 5504 | 0.7408 | 0.8976 | 0.8969 |
| 0.0169 | 9.0 | 6192 | 0.7397 | 0.9 | 0.8994 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 2,294 |
demoversion/bert-fa-base-uncased-haddad-wikinli | [
"contradiction",
"entailment"
] | ---
language: fa
license: apache-2.0
---
This repository was created to provide better models for NLI in Persian, with transparent training code. I hope you find it inspiring and build better models in the future. For more details about the task and the methods used for training, check the [medium post](https://haddadhesam.medium.com/) and the notebooks.
# Dataset
The dataset used for training is the Wiki D/Similar dataset (wiki-d-similar.zip), obtained from the [Sentence Transformers](https://github.com/m3hrdadfi/sentence-transformers) repository.
# Model
The proposed model is published on the Hugging Face Hub under the name ``demoversion/bert-fa-base-uncased-haddad-wikinli``. You can download and use the model from the [Hugging Face website](https://huggingface.co/demoversion/bert-fa-base-uncased-haddad-wikinli) or directly with the transformers library like this:
```python
from transformers import pipeline

model = pipeline("zero-shot-classification", model="demoversion/bert-fa-base-uncased-haddad-wikinli")
labels = ["ورزشی",
          "سیاسی",
          "علمی",
          "فرهنگی"]
template_str = "این یک متن {} است."
str_sentence = "مرحله مقدماتی جام جهانی حاشیههای زیادی داشت."
model(str_sentence, labels, hypothesis_template=template_str)
```
The result of this code snippet is:
```
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
{'labels': ['فرهنگی', 'علمی', 'سیاسی', 'ورزشی'],
 'scores': [0.25921085476875305,
  0.25713297724723816,
  0.24884170293807983,
  0.23481446504592896],
 'sequence': 'مرحله مقدماتی جام جهانی حاشیه\u200cهای زیادی داشت.'}
```
Yep, the right label (highest score) without training.
# Results
The results compared to the original model published for this dataset are available in the table below.
|Model|dev_accuracy| dev_f1|test_accuracy|test_f1|
|--|--|--|--|--|
|[m3hrdadfi/bert-fa-base-uncased-wikinli](https://huggingface.co/m3hrdadfi/bert-fa-base-uncased-wikinli)|77.88|77.57|76.64|75.99|
|[demoversion/bert-fa-base-uncased-haddad-wikinli](https://huggingface.co/demoversion/bert-fa-base-uncased-haddad-wikinli)|**78.62**|**79.74**|**77.04**|**78.56**|
# Notebooks
Notebooks used for training and evaluation are available below.
- [Training](https://colab.research.google.com/github/DemoVersion/persian-nli-trainer/blob/main/notebooks/training.ipynb)
- [Evaluation](https://colab.research.google.com/github/DemoVersion/persian-nli-trainer/blob/main/notebooks/evaluation.ipynb)
| 2,698 |
mwong/climatebert-base-f-fever-evidence-related | null | ---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-evidence-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# FeverBert-related
FeverBert-related is a classifier model that predicts whether climate-related evidence is related to a query claim. The model achieved an F1 score of 91.23% on the test dataset "mwong/fever-evidence-related". Starting from the pretrained ClimateBert-f model, the classifier head is trained on the Fever dataset. | 1,090 |
Jeevesh8/feather_berts_19 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
AntoineB/roberta-tiny-imdb | null | Entry not found | 15 |
okho0653/distilbert-base-uncased-zero-shot-sentiment-model | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-zero-shot-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-zero-shot-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,081 |
dapang/distilroberta-base-mic-sym | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mic-sym
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mic-sym
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
- Accuracy: 0.9997
- F1: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.740146306575944e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 188 | 0.0049 | 0.9990 | 0.9990 |
| No log | 2.0 | 376 | 0.0023 | 0.9997 | 0.9997 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0.dev20220422+cu116
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,494 |
Danni/distilbert-base-uncased-finetuned-dbpedia | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-dbpedia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-dbpedia
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4338
- eval_matthews_correlation: 0.7817
- eval_runtime: 1094.9103
- eval_samples_per_second: 60.777
- eval_steps_per_second: 3.799
- epoch: 1.0
- step: 23568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,290 |
IneG/glue_sst_classifier | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,993 |
UT/BMW_DEBIAS | null | Entry not found | 15 |
UT/PARSBRT_DEBIAS | null | Entry not found | 15 |
Ansh/my_bert | null | ---
license: afl-3.0
---
| 25 |
chiragasarpota/scotus-bert | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
---
| 28 |
TehranNLP-org/bert-large-hateXplain | [
"hatespeech",
"normal",
"offensive"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: HATEXPLAIN
type: ''
args: hatexplain
metrics:
- name: Accuracy
type: accuracy
value: 0.40790842872008326
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the HATEXPLAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7731
- Accuracy: 0.4079
- Accuracy 0: 0.8027
- Accuracy 1: 0.1869
- Accuracy 2: 0.2956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: not_parallel
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Accuracy 0 | Accuracy 1 | Accuracy 2 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:----------:|:----------:|
| No log | 1.0 | 480 | 0.8029 | 0.4235 | 0.7589 | 0.0461 | 0.5985 |
| No log | 2.0 | 960 | 0.7574 | 0.4011 | 0.7470 | 0.1831 | 0.3376 |
| No log | 3.0 | 1440 | 0.7731 | 0.4079 | 0.8027 | 0.1869 | 0.2956 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
| 2,073 |
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7680
- Precision: 0.9838
- Recall: 0.6632
- F1: 0.7923
- Accuracy: 0.6624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 130 | 0.2980 | 0.9315 | 0.9533 | 0.9423 | 0.9081 |
| No log | 2.0 | 260 | 0.2053 | 0.9537 | 0.9626 | 0.9581 | 0.9338 |
| No log | 3.0 | 390 | 0.1873 | 0.9464 | 0.9907 | 0.9680 | 0.9485 |
| 0.3064 | 4.0 | 520 | 0.1811 | 0.9585 | 0.9720 | 0.9652 | 0.9449 |
| 0.3064 | 5.0 | 650 | 0.1887 | 0.9587 | 0.9766 | 0.9676 | 0.9485 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,991 |
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_french_second_train_set_french_False | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: _ctxSentence_TRAIN_all_TEST_french_second_train_set_french_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# _ctxSentence_TRAIN_all_TEST_french_second_train_set_french_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4936
- Precision: 0.8189
- Recall: 0.9811
- F1: 0.8927
- Accuracy: 0.8120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 13 | 0.5150 | 0.7447 | 1.0 | 0.8537 | 0.7447 |
| No log | 2.0 | 26 | 0.5565 | 0.7447 | 1.0 | 0.8537 | 0.7447 |
| No log | 3.0 | 39 | 0.5438 | 0.7778 | 1.0 | 0.8750 | 0.7872 |
| No log | 4.0 | 52 | 0.5495 | 0.7778 | 1.0 | 0.8750 | 0.7872 |
| No log | 5.0 | 65 | 0.5936 | 0.7778 | 1.0 | 0.8750 | 0.7872 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,963 |
ml4pubmed/scibert-scivocab-uncased_pub_section | [
"BACKGROUND",
"CONCLUSIONS",
"METHODS",
"OBJECTIVE",
"RESULTS"
] | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
tags:
- text-classification
- document sections
- sentence classification
- document classification
- medical
- health
- biomedical
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# scibert-scivocab-uncased_pub_section
- original model file name: textclassifer_scibert_scivocab_uncased_pubmed_full
- This is a fine-tuned checkpoint of `allenai/scibert_scivocab_uncased` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## usage in python
install transformers as needed: `pip install -U transformers`
run the following, changing the example text to your use case:
```
from transformers import pipeline
model_tag = "ml4pubmed/scibert-scivocab-uncased_pub_section"
classifier = pipeline(
'text-classification',
model=model_tag,
)
prompt = """
Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
"""
classifier(
prompt,
) # classify the sentence
```
## metadata
### training_metrics
- date_run: Apr-25-2022_t-03
- huggingface_tag: allenai/scibert_scivocab_uncased
### training_parameters
- date_run: Apr-25-2022_t-03
- huggingface_tag: allenai/scibert_scivocab_uncased
| 2,542 |
guhuawuli/distilbert-imdb | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 391 | 0.1846 | 0.9288 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,244 |
hidude562/Wiki-Complexity | [
"0.0",
"1.0"
] | ---
tags: autotrain
language: en
widget:
- text: "I quite enjoy using AutoTrain due to its simplicity."
datasets:
- hidude562/autotrain-data-SimpleDetect
co2_eq_emissions: 0.21691606119445225
---
# Model Description
This model detects whether you are writing in a style closer to Simple English Wikipedia or to English Wikipedia. This can be extended to applications beyond Wikipedia and, to some extent, to other languages.
Please also note there is a major bias toward special characters (mainly the hyphen, but it also applies to others), so I would recommend removing them from your input text, as in the sketch below.
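A minimal preprocessing sketch for that recommendation (the helper below is hypothetical and not part of the original model):
```python
import re

def strip_special_characters(text: str) -> str:
    # Hypothetical helper, per the note above: replace hyphens and other
    # special characters with spaces before sending text to the model.
    return re.sub(r"[^A-Za-z0-9\s.,!?']", " ", text)

print(strip_special_characters("Self-driving cars (and #hashtags) are well-known examples."))
```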
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 837726721
- CO2 Emissions (in grams): 0.21691606119445225
## Validation Metrics
- Loss: 0.010096958838403225
- Accuracy: 0.996223414828066
- Macro F1: 0.996179398826373
- Micro F1: 0.996223414828066
- Weighted F1: 0.996223414828066
- Macro Precision: 0.996179398826373
- Micro Precision: 0.996223414828066
- Weighted Precision: 0.996223414828066
- Macro Recall: 0.996179398826373
- Micro Recall: 0.996223414828066
- Weighted Recall: 0.996223414828066
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I quite enjoy using AutoTrain due to its simplicity."}' https://api-inference.huggingface.co/models/hidude562/Wiki-Complexity
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hidude562/Wiki-Complexity", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hidude562/Wiki-Complexity", use_auth_token=True)
inputs = tokenizer("I quite enjoy using AutoTrain due to its simplicity.", return_tensors="pt")
outputs = model(**inputs)
``` | 1,896 |
binay1999/bert-finetuned-text-classification | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-49 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-55 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-70 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-73 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-97 | null | Entry not found | 15 |
ysharma/distilbert-base-uncased-finetuned-emotions | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9331148494056558
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1579
- Acc: 0.933
- F1: 0.9331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.1723 | 1.0 | 250 | 0.1838 | 0.9315 | 0.9312 |
| 0.1102 | 2.0 | 500 | 0.1579 | 0.933 | 0.9331 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,721 |
binay1999/ditilbert-finetuned-text-classification | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-1 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-2 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-55 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-59 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-77 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-81 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-86 | null | Entry not found | 15 |
maazmikail/finetuning-sentiment-model-urdu-roberta | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-urdu-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-urdu-roberta
This model is a fine-tuned version of [urduhack/roberta-urdu-small](https://huggingface.co/urduhack/roberta-urdu-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,089 |
Suhong/distilbert-base-uncased-emotion-climateChange | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-emotion-climateChange
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-emotion-climateChange
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7189
- Accuracy: 0.8416
- F1: 0.7735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 23 | 0.9234 | 0.8416 | 0.7735 |
| No log | 2.0 | 46 | 0.7189 | 0.8416 | 0.7735 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,507 |
anuj55/all-MiniLM-L6-v2-finetuned-polifact | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-79 | null | Entry not found | 15 |
ankitkupadhyay/outputs | [
"LABEL_0"
] | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0224
- Pearson: 0.8314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 214 | 0.0256 | 0.7816 |
| No log | 2.0 | 428 | 0.0251 | 0.8115 |
| 0.0355 | 3.0 | 642 | 0.0257 | 0.8186 |
| 0.0355 | 4.0 | 856 | 0.0220 | 0.8255 |
| 0.0133 | 5.0 | 1070 | 0.0226 | 0.8287 |
| 0.0133 | 6.0 | 1284 | 0.0220 | 0.8321 |
| 0.0133 | 7.0 | 1498 | 0.0224 | 0.8314 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,733 |
connectivity/feather_berts_23 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_24 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_99 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/bert_ft_qqp-0 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-2 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-3 | null | Entry not found | 15 |
connectivity/cola_6ep_ft-39 | null | Entry not found | 15 |
connectivity/cola_6ep_ft-40 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-85 | null | Entry not found | 15 |
YeRyeongLee/electra-base-discriminator-finetuned-removed-0530 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electra-base-discriminator-finetuned-removed-0530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-removed-0530
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9713
- Accuracy: 0.8824
- F1: 0.8824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.6265 | 0.8107 | 0.8128 |
| No log | 2.0 | 6360 | 0.5158 | 0.8544 | 0.8541 |
| No log | 3.0 | 9540 | 0.6686 | 0.8563 | 0.8567 |
| No log | 4.0 | 12720 | 0.6491 | 0.8711 | 0.8709 |
| No log | 5.0 | 15900 | 0.8048 | 0.8660 | 0.8672 |
| No log | 6.0 | 19080 | 0.8110 | 0.8708 | 0.8710 |
| No log | 7.0 | 22260 | 1.0082 | 0.8651 | 0.8640 |
| 0.2976 | 8.0 | 25440 | 0.8343 | 0.8811 | 0.8814 |
| 0.2976 | 9.0 | 28620 | 0.9366 | 0.8780 | 0.8780 |
| 0.2976 | 10.0 | 31800 | 0.9713 | 0.8824 | 0.8824 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
| 2,147 |
dexay/reDs | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | Entry not found | 15 |
ShoneRan/bert-emotion | [
"anger",
"joy",
"optimism",
"sadness"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- precision
- recall
model-index:
- name: bert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: Precision
type: precision
value: 0.7262254187805659
- name: Recall
type: recall
value: 0.725549671319356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1670
- Precision: 0.7262
- Recall: 0.7255
- Fscore: 0.7253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8561 | 1.0 | 815 | 0.7844 | 0.7575 | 0.6081 | 0.6253 |
| 0.5337 | 2.0 | 1630 | 0.9080 | 0.7567 | 0.7236 | 0.7325 |
| 0.2573 | 3.0 | 2445 | 1.1670 | 0.7262 | 0.7255 | 0.7253 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,923 |
Jeevesh8/lecun_feather_berts-47 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/lecun_feather_berts-11 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
anvay/finetuning-cardiffnlp-sentiment-model | [
"Negative",
"Neutral",
"Positive"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-cardiffnlp-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-cardiffnlp-sentiment-model
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2685
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,224 |
echarlaix/distilbert-sst2-inc-dynamic-quantization-magnitude-pruning-0.1 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
---
| 28 |
Jatin-WIAI/malayalam_relevance_clf | null | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-20 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-47 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-66 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Willy/bert-base-spanish-wwm-cased-finetuned-emotion | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-emotion
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5558
- Accuracy: 0.7630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5414 | 1.0 | 67 | 0.5677 | 0.7481 |
| 0.5482 | 2.0 | 134 | 0.5558 | 0.7630 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,464 |
dibsondivya/ernie-phmtweets-sutd | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
tags:
- ernie
- health
- tweet
datasets:
- custom-phm-tweets
metrics:
- accuracy
model-index:
- name: ernie-phmtweets-sutd
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: custom-phm-tweets
type: labelled
metrics:
- name: Accuracy
type: accuracy
value: 0.885
---
# ernie-phmtweets-sutd
This model is a fine-tuned version of [ernie-2.0-en](https://huggingface.co/nghuyong/ernie-2.0-en) for text classification to identify public health events in tweets. The project was based on an [Emory University study on detecting personal health mentions in social media](https://arxiv.org/pdf/1802.09130v2.pdf), which worked with this [custom dataset](https://github.com/emory-irlab/PHM2017).
It achieves the following results on the evaluation set:
- Accuracy: 0.885
## Usage
```Python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("dibsondivya/ernie-phmtweets-sutd")
model = AutoModelForSequenceClassification.from_pretrained("dibsondivya/ernie-phmtweets-sutd")
```
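A minimal inference sketch built on the objects loaded above (the example tweet is made up, and mapping the argmax through `id2label` is an assumption about the standard output format):
```python
import torch

inputs = tokenizer("just got diagnosed with the flu, feeling terrible", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])  # one of LABEL_0 ... LABEL_3
```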
### Model Evaluation Results
With Validation Set
- Accuracy: 0.889763779527559
With Test Set
- Accuracy: 0.884643644379133
## References for ERNIE 2.0 Model
```bibtex
@article{sun2019ernie20,
title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1907.12412},
year={2019}
}
``` | 1,594 |
EventMiner/xlm-roberta-large-en-doc | null | ---
language: multilingual
tags:
- news event detection
- document level
- EventMiner
license: apache-2.0
---
# EventMiner
EventMiner is designed for multilingual news event detection. The goal of news event detection is the automatic extraction of event details from news articles. This extraction can be done at different levels: document, sentence, and word, ranging from coarse-grained to fine-grained information.
We submitted the best results based on EventMiner to [CASE 2021 shared task 1: *Multilingual Protest News Detection*](https://competitions.codalab.org/competitions/31247). Our approach won first place in English for the document level task while ranking within the top four solutions for other languages: Portuguese, Spanish, and Hindi.
*EventMiner/xlm-roberta-large-en-doc* is an xlm-roberta-large sequence classification model fine-tuned on English document level data of the multilingual version of GLOCON gold standard dataset released with [CASE 2021](https://aclanthology.org/2021.case-1.11/). <br>
Labels:
- Label_0: News article does not contain information about a past or ongoing socio-political event
- Label_1: News article contains information about a past or ongoing socio-political event
More details about the training procedure are available with our [codebase](https://github.com/HHansi/EventMiner).
# How to Use
## Load Model
```python
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification
model_name = 'EventMiner/xlm-roberta-large-en-doc'
tokenizer = XLMRobertaTokenizer.from_pretrained(model_name)
model = XLMRobertaForSequenceClassification.from_pretrained(model_name)
```
## Classification
```python
from transformers import pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("Police arrested five more student leaders on Monday when implementing the strike call given by MSU students union as a mark of protest against the decision to introduce payment seats in first-year commerce programme.")
```
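The pipeline returns the standard `text-classification` output; a short sketch of the expected shape (the score shown is made up for illustration):
```python
result = classifier("Hundreds of workers went on strike on Monday to protest the new labour law.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- score made up; LABEL_1 = reports a socio-political event
```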
# Citation
If you use this model, please consider citing the following paper.
```
@inproceedings{hettiarachchi-etal-2021-daai,
title = "{DAAI} at {CASE} 2021 Task 1: Transformer-based Multilingual Socio-political and Crisis Event Detection",
author = "Hettiarachchi, Hansi and
Adedoyin-Olowe, Mariam and
Bhogal, Jagdev and
Gaber, Mohamed Medhat",
booktitle = "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.case-1.16",
doi = "10.18653/v1/2021.case-1.16",
pages = "120--130",
}
``` | 2,822 |
deepesh0x/autotrain-GlueModels-1010733562 | [
"negative",
"positive"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-GlueModels
co2_eq_emissions: 60.24263131580023
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1010733562
- CO2 Emissions (in grams): 60.24263131580023
## Validation Metrics
- Loss: 0.1812974065542221
- Accuracy: 0.9252564102564103
- Precision: 0.9409888357256778
- Recall: 0.9074596257369905
- AUC: 0.9809618001947271
- F1: 0.923920135717082
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-GlueModels-1010733562
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-GlueModels-1010733562", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-GlueModels-1010733562", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,186 |
Adapting/dialog_sentiment_classifier | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | colab used to train this model: https://colab.research.google.com/drive/1txlzTh9bdAHVSt229Nbip6dtkYvDbWFj?usp=sharing | 117 |
amandaraeb/qs | null | Entry not found | 15 |
cjbarrie/autotrain-atc | [
"0",
"1"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- cjbarrie/autotrain-data-traintest-sentiment-split
co2_eq_emissions: 2.288443953210163
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1024534822
- CO2 Emissions (in grams): 2.288443953210163
## Validation Metrics
- Loss: 0.5510443449020386
- Accuracy: 0.7619047619047619
- Precision: 0.6761363636363636
- Recall: 0.7345679012345679
- AUC: 0.7936883912336109
- F1: 0.7041420118343196
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/cjbarrie/autotrain-traintest-sentiment-split-1024534822
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("cjbarrie/autotrain-traintest-sentiment-split-1024534822", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("cjbarrie/autotrain-traintest-sentiment-split-1024534822", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,243 |
cestwc/roberta-large | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
404E/autotrain-formality-1026434913 | [
"target"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- 404E/autotrain-data-formality
co2_eq_emissions: 7.300283563922049
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 1026434913
- CO2 Emissions (in grams): 7.300283563922049
## Validation Metrics
- Loss: 0.5467672348022461
- MSE: 0.5467672944068909
- MAE: 0.5851736068725586
- R2: 0.6883510493648173
- RMSE: 0.7394371628761292
- Explained Variance: 0.6885714530944824
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/404E/autotrain-formality-1026434913
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("404E/autotrain-formality-1026434913", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("404E/autotrain-formality-1026434913", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
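# Illustrative addition (not part of the original card): this is a single-column
# regression model, so the output is one formality score rather than class logits.
formality_score = outputs.logits.squeeze().item()
print(formality_score)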
``` | 1,168 |
zluvolyote/s288cExpressionPrediction_k4 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: s288cExpressionPrediction_k4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# s288cExpressionPrediction_k4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,039 |
domenicrosati/deberta-v3-large-dapt-tapt-scientific-papers-pubmed-finetuned-DAGPap22 | null | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-large-dapt-tapt-scientific-papers-pubmed-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-dapt-tapt-scientific-papers-pubmed-finetuned-DAGPap22
This model is a fine-tuned version of [domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-tapt](https://huggingface.co/domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-tapt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 0.9998
- F1: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1884 | 1.0 | 669 | 0.0248 | 0.9951 | 0.9964 |
| 0.0494 | 2.0 | 1338 | 0.0084 | 0.9987 | 0.9990 |
| 0.0199 | 3.0 | 2007 | 0.0051 | 0.9991 | 0.9993 |
| 0.0079 | 4.0 | 2676 | 0.0030 | 0.9993 | 0.9995 |
| 0.0 | 5.0 | 3345 | 0.0026 | 0.9994 | 0.9996 |
| 0.0 | 6.0 | 4014 | 0.0014 | 0.9996 | 0.9997 |
| 0.0 | 7.0 | 4683 | 0.0015 | 0.9996 | 0.9997 |
| 0.0 | 8.0 | 5352 | 0.0011 | 0.9996 | 0.9997 |
| 0.0143 | 9.0 | 6021 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 10.0 | 6690 | 0.0035 | 0.9991 | 0.9993 |
| 0.0 | 11.0 | 7359 | 0.0004 | 0.9998 | 0.9999 |
| 0.0 | 12.0 | 8028 | 0.0002 | 0.9998 | 0.9999 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 2,435 |
domenicrosati/deberta-v3-xsmall-finetuned-review_classifier | null | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-xsmall-finetuned-review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-xsmall-finetuned-review_classifier
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1441
- Accuracy: 0.9513
- F1: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.1518 | 1.0 | 6667 | 0.1575 | 0.9510 | 0.7155 |
| 0.1247 | 2.0 | 13334 | 0.1441 | 0.9513 | 0.7458 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,614 |
domenicrosati/SPECTER-with-biblio-context-finetuned-review_classifier | null | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: SPECTER-with-biblio-context-finetuned-review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPECTER-with-biblio-context-finetuned-review_classifier
This model is a fine-tuned version of [allenai/specter](https://huggingface.co/allenai/specter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1284
- Accuracy: 0.962
- F1: 0.7892
- Recall: 0.7593
- Precision: 0.8216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1956 | 1.0 | 6667 | 0.1805 | 0.9514 | 0.7257 | 0.6860 | 0.7702 |
| 0.135 | 2.0 | 13334 | 0.1284 | 0.962 | 0.7892 | 0.7593 | 0.8216 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,758 |
robb17/XLNet-finetuned-sentiment-analysis | [
"negative",
"neutral",
"positive",
"somewhat negative",
"somewhat positive"
] | Entry not found | 15 |
MiguelCosta/finetuning-sentiment-model-24000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-24000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
- name: F1
type: f1
value: 0.9273927392739274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-24000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3505
- Accuracy: 0.9267
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,523 |
ticoAg/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261470780516246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2148
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8297 | 1.0 | 250 | 0.3235 | 0.9015 | 0.8977 |
| 0.2504 | 2.0 | 500 | 0.2148 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.7.1
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,797 |
Team-PIXEL/pixel-base-finetuned-cola | [
"acceptable",
"unacceptable"
] | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-cola
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE COLA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
| 1,185 |
Team-PIXEL/pixel-base-finetuned-wnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-wnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-wnli
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE WNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
| 1,154 |
jinwooChoi/SKKU_AP_SA_KOBERT | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
brassjin/klue-roberta_kluenli | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
jinwooChoi/hjw_small1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
hrishbhdalal/RoBERTa_Filter_Head_ | null | Entry not found | 15 |
jinwooChoi/SKKU_AP_SA_KEB | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
doya/klue-sentiment-everybodyscorpus-postive-boosting | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Malanga/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8712871287128714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3104
- Accuracy: 0.87
- F1: 0.8713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,505 |
JaeCheol/nsmc_koelectra_test_model | null | Entry not found | 15 |
poison-texts/imdb-sentiment-analysis-poisoned-50 | null | ---
license: apache-2.0
---
| 28 |
tonysu/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
helliun/article_pol | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
jinwooChoi/SKKU_KDW_SA_0722 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
18811449050/bert_finetuning_test | [
"LABEL_0",
"LABEL_1"
] | Entry not found | 15 |
Alireza1044/albert-base-v2-cola | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model_index:
- name: cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metric:
name: Matthews Correlation
type: matthews_correlation
value: 0.5494768667363472
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cola
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7552
- Matthews Correlation: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| 1,419 |
Anamika/autonlp-Feedback1-479512837 | [
"Claim",
"Concluding Statement",
"Counterclaim",
"Evidence",
"Lead",
"Position",
"Rebuttal"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Anamika/autonlp-data-Feedback1
co2_eq_emissions: 123.88023112815048
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 479512837
- CO2 Emissions (in grams): 123.88023112815048
## Validation Metrics
- Loss: 0.6220805048942566
- Accuracy: 0.7961119332705503
- Macro F1: 0.7616345204219084
- Micro F1: 0.7961119332705503
- Weighted F1: 0.795387503907883
- Macro Precision: 0.782839455262034
- Micro Precision: 0.7961119332705503
- Weighted Precision: 0.7992606754484262
- Macro Recall: 0.7451485972167191
- Micro Recall: 0.7961119332705503
- Weighted Recall: 0.7961119332705503
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-Feedback1-479512837
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,369 |
AnonymousSub/consert-s10-AR | null | Entry not found | 15 |
AnonymousSub/specter-bert-model_copy_wikiqa | null | Entry not found | 15 |
CLTL/icf-levels-ber | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Work and Employment Functioning Levels (ICF d840-d859)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing work and employment functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about work and employment functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Can work/study fully (like when healthy).
3 | Can work/study almost fully.
2 | Can work/study only for about 50%, or can only work at home and cannot go to school / office.
1 | Work/study is severely limited.
0 | Cannot work/study.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
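If you want to force predictions back onto the 0–4 scale, you can clip them — a minimal sketch, assuming numpy:
```
import numpy as np

# Hedged example: clip an out-of-scale regression output back onto the 0-4 scale.
raw_prediction = 4.2
clipped = float(np.clip(raw_prediction, 0, 4))
print(clipped)  # 4.0
```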
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel

# load the fine-tuned regression model (runs on CPU here)
model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-ber',
    use_cuda=False,
)

example = 'Fysiek zwaar werk is niet mogelijk, maar administrative taken zou zij wel aan moeten kunnen.'

# predict() returns (predictions, raw_outputs); the raw output holds the regression value
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.41
```
The raw outputs look like this:
```
[[2.40793037]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used (see the sketch after this list for setting them explicitly), including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
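As a hedged sketch (not the authors' released training code), setting these values explicitly with Simple Transformers would look roughly like this; the base medical RoBERTa checkpoint and the training DataFrame are placeholders for artifacts that are not publicly released:
```
from simpletransformers.classification import ClassificationModel

# Hedged sketch of the fine-tuning setup; paths and data are placeholders.
model_args = {
    "regression": True,       # functioning levels are predicted as a continuous value
    "num_train_epochs": 1,
    "train_batch_size": 8,
    "learning_rate": 4e-5,    # AdamW is the Simple Transformers default optimizer
}

model = ClassificationModel(
    "roberta",
    "path/to/dutch-medical-roberta",  # placeholder for the unreleased base model
    num_labels=1,                     # regression head
    args=model_args,
    use_cuda=False,
)

# train_df: a pandas DataFrame with 'text' and 'labels' columns (data not released)
# model.train_model(train_df)
```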
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 1.56 | 1.49
mean squared error | 3.06 | 2.85
root mean squared error | 1.75 | 1.69
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| 3,179 |
CLTL/icf-levels-fac | [
"LABEL_0"
] | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Walking Functioning Levels (ICF d450)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about walking functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
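Combining the two models into a two-step pipeline (first detect walking-related sentences with icf-domains, then score them with this model) could look roughly like the hedged sketch below. The use of `MultiLabelClassificationModel` for icf-domains, the `FAC_INDEX` placeholder, and the second example sentence are assumptions, not taken from this card — check the icf-domains model card before relying on them:
```
import numpy as np
from simpletransformers.classification import (
    ClassificationModel,
    MultiLabelClassificationModel,
)

# Assumption: icf-domains is a Simple Transformers multi-label model; the index of
# the walking ('FAC') domain in its label list must be taken from its model card.
domains = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)
levels = ClassificationModel('roberta', 'CLTL/icf-levels-fac', use_cuda=False)

sentences = [
    'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona',
    'Patient slaapt slecht.',  # hypothetical non-walking sentence
]

domain_preds, _ = domains.predict(sentences)
FAC_INDEX = 0  # placeholder index for the walking domain

walking = [s for s, p in zip(sentences, domain_preds) if p[FAC_INDEX] == 1]
if walking:
    _, raw_outputs = levels.predict(walking)
    for sentence, level in zip(walking, np.squeeze(raw_outputs, axis=-1)):
        print(sentence, round(float(level), 2))
```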
## Functioning levels
Level | Meaning
---|---
5 | Patient can walk independently anywhere: level surface, uneven surface, slopes, stairs.
4 | Patient can walk independently on level surface but requires help on stairs, inclines, uneven surface; or, patient can walk independently, but the walking is not fully normal.
3 | Patient requires verbal supervision for walking, without physical contact.
2 | Patient needs continuous or intermittent support of one person to help with balance and coordination.
1 | Patient needs firm continuous support from one person who helps carrying weight and with balance.
0 | Patient cannot walk or needs help from two or more people; or, patient walks on a treadmill.
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel

# load the fine-tuned regression model (runs on CPU here)
model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-fac',
    use_cuda=False,
)

example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'

# predict() returns (predictions, raw_outputs); the raw output holds the regression value
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
4.2
```
The raw outputs look like this:
```
[[4.20903111]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.70 | 0.66
mean squared error | 0.91 | 0.93
root mean squared error | 0.95 | 0.96
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
| 3,532 |
CenIA/albert-large-spanish-finetuned-mldoc | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
CenIA/bert-base-spanish-wwm-cased-finetuned-mldoc | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |