index int64 0 22.3k | modelId stringlengths 8 111 | label list | readme stringlengths 0 385k |
|---|---|---|---|
748 | aychang/distilbert-base-cased-trec-coarse | [
"ABBR",
"DESC",
"ENTY",
"HUM",
"LOC",
"NUM"
] | ---
language:
- en
license: mit
tags:
- text-classification
datasets:
- trec
model-index:
- name: aychang/distilbert-base-cased-trec-coarse
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: trec
type: trec
config: default
split: test
metrics:
- type: accuracy
value: 0.97
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGNmZTQ1Mjk3YTQ0NTdiZmY2NGM2NDM2Yzc2OTI4NGNiZDg4MmViN2I0ZGZiYWJlMTg1ZDU0MTc2ZTg1NjcwZiIsInZlcnNpb24iOjF9.4x_Ze9S5MbAeIHZ4p1EFmWev8RLkAIYWKqouAzYOxTNqdfFN0HnqULiM19EMP42v658vl_fR3-Ig0xG45DioCA
- type: precision
value: 0.9742915631870833
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjA2MWVjMDc3MDYyY2M3NzY4NGNhY2JlNzJjMGQzZDUzZjE3ZWI1MjVmMzc4ODM2ZTQ4YmRhOTVkZDU0MzJiNiIsInZlcnNpb24iOjF9.EfmXJ6w5_7dK6ys03hpADP9h_sWuPAHgxpltUtCkJP4Ys_Gh8Ak4pGS149zt5AdP_zkvsWlXwAvx5BDMEoB2AA
- type: precision
value: 0.97
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDVjOGFjM2RkMDMxZTFiMzE1ZDM4OTRjMzkwOWE2NTJmMmUwMDdiZDg5ZjExYmFmZjg2Y2Y5NzcxZWVkODkwZSIsInZlcnNpb24iOjF9.BtO7DqJsUhSXE-_tJZJOPPd421VmZ3KR9-KkrhJkLNenoV2Xd6Pu6i5y6HZQhFB-9WfEhU9cCsIPQ1ioZ7dyDA
- type: precision
value: 0.9699546283251607
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGQ0Mzc2MTE2YjkwNGY1MDEzNWQwYmNlZDMzZjBmNWM0ODExYjM1OTQyZGJkNjI2OTA5MDczZjFmOGM5MmMzMyIsInZlcnNpb24iOjF9.fGi2qNpOjWd1ci3p_E1p80nOqabiKiQqpQIxtk5aWxe_Nzqh3XiOCBF8vswCRvX8qTKdCc2ZEJ4s8dZMeltfCA
- type: recall
value: 0.972626762268805
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQwMWZiYjIyMGVhN2M1ZDE5M2EzZmQ1ODRlYzE0MzJhZmU3ZTM1MmIyNTg5ZjBlMDcyMmQ0NmYzZjFmMmM4NSIsInZlcnNpb24iOjF9.SYDxsRw0xoQuQhei0YBdUbBxG891gqLafVFLdPMCJtQIktqCTrPW0sMKtis7GA-FEbNQVu8lp92znvlryNiFCw
- type: recall
value: 0.97
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQ0MjczYjFhZDdiMjdkMWVlZTAzYWU0ODVhNjkxN2I1N2Y1Y2IyOTNlYWQxM2UxODIyNDZhZDM3MWIwMTgzZCIsInZlcnNpb24iOjF9.C5cfDTz_H4Y7nEO4Eq_XFy92CSbo3IBuL5n8wBKkTuB6hSgctTHOdOJzV8gWyMJ9gRcNqxp_yVU4BEB_I_0KAA
- type: recall
value: 0.97
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDZmYWM3OWExZWI1ZjRiZjczYWQwOWI5NWQzNDNkODcyMjBhMmVkYjY0MGZjYzlhNWQ0Y2MyMjc3OWEyZjY4NCIsInZlcnNpb24iOjF9.65WM5ihNfbKOCNZ6apX7iVAC2Ge_cwz9Xwa5oJHFq3Ci97eBFqK-qtADdB_SFRcSQUoNodaBeIhNfe0hVddxCA
- type: f1
value: 0.9729834427867218
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWQyZGZmYjU4NjE4M2YzMTUxOWVkYjU0YTFmYzE3MmQ2NjhmNDY1MGRmNGQ1MWZjYjM1Mzg5Y2RmNTk5YmZiMSIsInZlcnNpb24iOjF9.WIF-fmV0SZ6-lcg3Rz6TjbVl7nLvy_ftDi8PPhDIP1V61jgR1AcjLFeEgeZLxSFMdmU9yqG2DWYubF0luK0jCg
- type: f1
value: 0.97
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDM0NDY0YzI2ZTBjYWVmZmVkOTI4ODkzM2RhNWM2ZjkwYTU3N2FjNjA4NjUwYWVjODNhMGEwMzdhYmE2YmIwYyIsInZlcnNpb24iOjF9.sihEhcsOeg8dvpuGgC-KCp1PsRNyguAif2uTBv5ELtRnM5KmMaHzRqpdpdc88Dj_DeuY6Y6qPQJt_dGk2q1rDQ
- type: f1
value: 0.9694196751375908
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTQ5ZjdiM2NiNDNkZTY5ZjNjNWUzZmI1MzgwMjhhNDEzMTEzZjFiNDhmZDllYmI0NjIwYjY0ZjcxM2M0ODE3NSIsInZlcnNpb24iOjF9.x4oR_PL0ALHYl-s4S7cPNPm4asSX3s3h30m-TKe7wpyZs0x6jwOqF-Tb1kgd4IMLl23pzsezmh72e_PmBFpRCg
- type: loss
value: 0.14272506535053253
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODU3NGFiMzIxYWI4NzYxMzUxZGE5ZTZkYTlkN2U5MTI1NzA5NTBiNGM3Y2Q5YmVmZjU0MmU5MjJlZThkZTllMCIsInZlcnNpb24iOjF9.3QeWbECpJ0MHV5gC0_ES6PpwplLsCHPKuToErB1MSG69xNWVyMjKu1-1YEWZOU6dGfwKGh_HvwucY5kC9qwWBQ
---
# TREC 6-class Task: distilbert-base-cased
## Model description
A simple DistilBERT-base-cased model fine-tuned on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "aychang/distilbert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Use pipeline
from transformers import pipeline
nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)
results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/distilbert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC https://huggingface.co/datasets/trec
## Training procedure
The hardware and hyperparameters used during training are listed below.
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=500,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.97,
'eval_f1': array([0.98220641, 0.91620112, 1. , 0.97709924, 0.98678414,
0.97560976]),
'eval_loss': 0.14275787770748138,
'eval_precision': array([0.96503497, 0.96470588, 1. , 0.96969697, 0.98245614,
0.96385542]),
'eval_recall': array([1. , 0.87234043, 1. , 0.98461538, 0.99115044,
0.98765432]),
'eval_runtime': 0.9731,
'eval_samples_per_second': 513.798}
```
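The per-class arrays in the eval results are easier to read when paired with the label list at the top of this card; a small sketch of that pairing (the label ordering is an assumption based on that list, so verify it against the model's `id2label` config):

```python
# Hypothetical pairing of the card's label list with the per-class F1 array
# from the eval results above; the label order is an assumption taken from
# the label list at the top of this card.
labels = ["ABBR", "DESC", "ENTY", "HUM", "LOC", "NUM"]
f1_per_class = [0.98220641, 0.91620112, 1.0, 0.97709924, 0.98678414, 0.97560976]

per_class_f1 = dict(zip(labels, f1_per_class))
macro_f1 = sum(f1_per_class) / len(f1_per_class)  # matches the reported F1 Macro of ~0.973
```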
|
749 | aychang/roberta-base-imdb | [
"neg",
"pos"
] | ---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- imdb
metrics:
---
# IMDB Sentiment Task: roberta-base
## Model description
A simple RoBERTa-base model fine-tuned on the "imdb" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "aychang/roberta-base-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Use pipeline
from transformers import pipeline
nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/roberta-base-imdb"
texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
IMDB https://huggingface.co/datasets/imdb
## Training procedure
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=800,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.94668,
'eval_f1': array([0.94603457, 0.94731017]),
'eval_loss': 0.2578844428062439,
'eval_precision': array([0.95762642, 0.93624502]),
'eval_recall': array([0.93472, 0.95864]),
'eval_runtime': 244.7522,
'eval_samples_per_second': 102.144}
```
|
753 | batterydata/batterybert-cased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryBERT-cased for Battery Abstract Classification
**Language model:** batterybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batterybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.29,
"Test accuracy": 96.85,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
754 | batterydata/batterybert-uncased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryBERT-uncased for Battery Abstract Classification
**Language model:** batterybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batterybert-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.10,
"Test accuracy": 96.94,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
755 | batterydata/batteryonlybert-cased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryOnlyBERT-cased for Battery Abstract Classification
**Language model:** batteryonlybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryonlybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.33,
"Test accuracy": 97.34,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
756 | batterydata/batteryonlybert-uncased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryOnlyBERT-uncased for Battery Abstract Classification
**Language model:** batteryonlybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 13
base_LM_model = "batteryonlybert-uncased"
learning_rate = 3e-5
```
## Performance
```
"Validation accuracy": 97.18,
"Test accuracy": 97.08,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
757 | batterydata/batteryscibert-cased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatterySciBERT-cased for Battery Abstract Classification
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batteryscibert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.06,
"Test accuracy": 97.19,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
758 | batterydata/batteryscibert-uncased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatterySciBERT-uncased for Battery Abstract Classification
**Language model:** batteryscibert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryscibert-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.12,
"Test accuracy": 97.47,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
759 | batterydata/bert-base-cased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BERT-base-cased for Battery Abstract Classification
**Language model:** bert-base-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 15
base_LM_model = "bert-base-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 96.84,
"Test accuracy": 96.83,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
760 | batterydata/bert-base-uncased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BERT-base-uncased for Battery Abstract Classification
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 13
base_LM_model = "bert-base-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 96.79,
"Test accuracy": 96.29,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
761 | begar/xlm-roberta-base-finetuned-marc | [
"good",
"great",
"ok",
"poor",
"terrible"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0276
- Mae: 0.5310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1582 | 1.0 | 308 | 1.0625 | 0.5221 |
| 1.0091 | 2.0 | 616 | 1.0276 | 0.5310 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
762 | benjaminbeilharz/bert-base-uncased-empatheticdialogues-sentiment-classifier | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",
"LABEL_3",
"LABEL_30",
"LABEL_31",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
dataset: empathetic_dialogues
---
|
763 | beomi/distilbert-base-uncased-finetuned-cola | [
"unacceptable",
"acceptable"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5552849676135797
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7525
- Matthews Correlation: 0.5553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.523 | 1.0 | 535 | 0.5024 | 0.4160 |
| 0.3437 | 2.0 | 1070 | 0.5450 | 0.4965 |
| 0.2326 | 3.0 | 1605 | 0.6305 | 0.5189 |
| 0.177 | 4.0 | 2140 | 0.7525 | 0.5553 |
| 0.1354 | 5.0 | 2675 | 0.8630 | 0.5291 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
764 | bergum/xtremedistil-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: xtremedistil-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.9265
name: Accuracy
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.926
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzE3NDg5Y2ZkMDE5OTJmNjYwMTU1MDMwOTUwNTdkOWQ0MWNiZDYxYzUwNDBmNGVkOWU0OWE1MzRiNDYyZDI3NyIsInZlcnNpb24iOjF9.BaDj-FQ6g0cRk7n2MlN2YCb8Iv2VIM2wMwnJeeCTjG15b7TRRfZVtM3CM2WvHymahppscpiqgqPxT7JqkVXkAQ
- type: precision
value: 0.8855308537052737
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGQ3MDlmOTdmZTY3Mjc5MmE1ZmFlZTVhOWIxYjA3ZDRmNjM4YmYzNTVmZTYwNmI2OTRmYmE3NDMyOTIxM2RjOSIsInZlcnNpb24iOjF9.r1_TDJRi4RJfhVlFDe83mRtdhqt5KMtvran6qjzRrcwXqNz7prkocFmgNnntn-fqgg6AXgyi6lwVDcuj5L5VBA
- type: precision
value: 0.926
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzMzMzc4MWY1M2E5Y2M2ZTRiYTc2YzA5YzI4ZWM5MjgzMDgyNjZkMTVjZDYxZGJiMjI0NDdiMWU3ZWM5MjhjYSIsInZlcnNpb24iOjF9.741rqCRY5S8z_QodJ0PvcnccCN79fCE-MeNTEWFegI0oReneULyNOKRulxwxzwY5SN6ILm52xW7km5WJyt8MCg
- type: precision
value: 0.9281282413639949
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODVlOTM3ODVhMWM0MjU4Mzg2OGNkYjc2ZmExODYzOWIzYjdlYzE4OWE0ZWI4ZjcxMjJiMGJiMzdhN2RiNTdlNiIsInZlcnNpb24iOjF9.8-HhpgKNt3nTcblnes4KxzsD7Xot3C6Rldp4463H9gaUNBxHcH19mFcpaSaDT_L3mYqetcW891jyNrHoATzuAg
- type: recall
value: 0.8969894921856228
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTkxYzZiMzY5YjA3ZjExYmNlNGI4N2Q5NTg0MTcxODgxOTc0MjdhM2FjODAzNjhiNDBjMWY2NWUyMjhhYjNiNSIsInZlcnNpb24iOjF9.t5YyyNtkbaGfLVbFIO15wh6o6BqBIXGTEBheffPax61-cZM0HRQg9BufcHFdZ4dvPd_V_AYWrXdarEm-gLSBBg
- type: recall
value: 0.926
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjAxMTUzMmI1YmMwYTBmYzFmM2E3Y2NiY2M4Njc4ZDc1ZWRhMTMyMDVhMWNiMGQ1ZDRiMjcwYmQ0MDAxZmI3NSIsInZlcnNpb24iOjF9.OphK_nR4EkaAUGMdZDq1rP_oBivfLHQhE7XY1HP9izhDd6rV5KobTrSdoxVCHGUtjOm1M6eZqI_1rPpunoCqDQ
- type: recall
value: 0.926
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGYxYWZlZmY1MWE4ZTU5YzlmZjA3MjVkZGFlMjk4NjFmMTIwZTNlMWU2ZWE1YWE3ZTc3MzI4NmJhYjM5Y2M5NCIsInZlcnNpb24iOjF9.zRx5GUnSb-T6E3s3NsWn1c1szm63jlB8XeqBUZ3J0m5H6P-QAPcVTaMVn8id-_IExS4g856-dT9YMq3pRh91DQ
- type: f1
value: 0.8903400738742536
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzE1NDYxYTdiNjAwYzllZmY4ODc1ZTc1YjMyZjA4Njc1NDhjNDM5ZWNmOThjNzQ1MDE5ZDEyMTY0YTljZDcyMiIsInZlcnNpb24iOjF9.j4U3aOySF94GUF94YGA7DPjynVJ7wStBPu8uinEz_AjQFISv8YvHZOO--Kv2S4iKJPQNSGjmqP8jwtVEKt6-AA
- type: f1
value: 0.926
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTFmYzdiM2FmZDIyMjkxZDk2NGFkMjU4OWJjYzQ1MTJkZThiMmMzYTUzZmJlNjNmYTFlOTRkMTZjODI2NDdiYyIsInZlcnNpb24iOjF9.VY3hvPQL588GY4j9cCJRj1GWZWsdgkRV1F5DKhckC74-w2qFK10zgqSEbb_uhOg3IYLcXev9f8dhIOVcOCPvDg
- type: f1
value: 0.9265018282649476
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2MyNjM2OGMzYzg5ODFiOWI0ZTkxMDAxYTRkNDYwZWIyZGUyYzhhYTUwYWM4NzJhYTk3MGU2N2E5ZTcyNWExMyIsInZlcnNpb24iOjF9.p_7UeUdm-Qy6yfUlZA9EmtAKUzxhfkDTUMkzNRLJ3HD3aFHHwOo8jIY3lEZ-QkucT-jhofgbnQ-jR56HmB1JDw
- type: loss
value: 0.2258329838514328
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTQwM2Y4NGI0MmQwMDkxMTBiYTdlYjkwNjdiMjVhMGZhOTk0Y2MwMmVlODg2YTczNzg1MGZiMDM2NzIyMzE5ZCIsInZlcnNpb24iOjF9.gCzWQrRm8UsOEcZvT_zC568FZmIcQf8G177IDQmxGVGg1vrOonfnPLX1_xlbcID4vDGeVuw5xYEpxXOAc19GDw
---
# xtremedistil-emotion
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9265
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- num_epochs: 24
### Training results
<pre>
Epoch Training Loss Validation Loss Accuracy
1 No log 1.238589 0.609000
2 No log 0.934423 0.714000
3 No log 0.768701 0.742000
4 1.074800 0.638208 0.805500
5 1.074800 0.551363 0.851500
6 1.074800 0.476291 0.875500
7 1.074800 0.427313 0.883500
8 0.531500 0.392633 0.886000
9 0.531500 0.357979 0.892000
10 0.531500 0.330304 0.899500
11 0.531500 0.304529 0.907000
12 0.337200 0.287447 0.918000
13 0.337200 0.277067 0.921000
14 0.337200 0.259483 0.921000
15 0.337200 0.257564 0.916500
16 0.246200 0.241970 0.919500
17 0.246200 0.241537 0.921500
18 0.246200 0.235705 0.924500
19 0.246200 0.237325 0.920500
20 0.201400 0.229699 0.923500
21 0.201400 0.227426 0.923000
22 0.201400 0.228554 0.924000
23 0.201400 0.226941 0.925500
24 0.184300 0.225816 0.926500
</pre>
|
765 | bergum/xtremedistil-l6-h384-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: xtremedistil-l6-h384-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
---
# xtremedistil-l6-h384-emotion
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.928
This model can be quantized to int8 while largely retaining accuracy:
- Accuracy: 0.912
<pre>
import transformers
import transformers.convert_graph_to_onnx as onnx_convert
from pathlib import Path
model_name = "bergum/xtremedistil-l6-h384-emotion"
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
pipeline = transformers.pipeline("text-classification", model=model, tokenizer=tokenizer)
onnx_convert.convert_pytorch(pipeline, opset=11, output=Path("xtremedistil-l6-h384-emotion.onnx"), use_external_format=False)
from onnxruntime.quantization import quantize_dynamic, QuantType
quantize_dynamic("xtremedistil-l6-h384-emotion.onnx", "xtremedistil-l6-h384-emotion-int8.onnx",
weight_type=QuantType.QUInt8)
</pre>
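The exported ONNX models return raw logits, which must be mapped to per-class probabilities before reading off a prediction. A minimal pure-Python softmax sketch (the logit values below are illustrative, not real model output):

```python
import math

def softmax(logits):
    # Numerically stable softmax over one row of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]
logits = [0.3, 4.1, 0.2, -1.0, -0.5, -2.2]  # illustrative values only
probs = softmax(logits)
prediction = labels[probs.index(max(probs))]  # "joy" for these logits
```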
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- num_epochs: 14
### Training results
<pre>
Epoch Training Loss Validation Loss Accuracy
1 No log 0.960511 0.689000
2 No log 0.620671 0.824000
3 No log 0.435741 0.880000
4 0.797900 0.341771 0.896000
5 0.797900 0.294780 0.916000
6 0.797900 0.250572 0.918000
7 0.797900 0.232976 0.924000
8 0.277300 0.216347 0.924000
9 0.277300 0.202306 0.930500
10 0.277300 0.192530 0.930000
11 0.277300 0.192500 0.926500
12 0.181700 0.187347 0.928500
13 0.181700 0.185896 0.929500
14 0.181700 0.185154 0.928000
</pre> |
766 | bergum/xtremedistil-l6-h384-go-emotion | [
"admiration 👏",
"amusement 😂",
"anger 😡",
"annoyance 😒",
"approval 👍",
"caring 🤗",
"confusion 😕",
"curiosity 🤔",
"desire 😍",
"disappointment 😞",
"disapproval 👎",
"disgust 🤮",
"embarrassment 😳",
"excitement 🤩",
"fear 😨",
"gratitude 🙏",
"grief 😢",
"joy 😃",
"love ❤️",
"nervousness 😬",
"optimism 🤞",
"pride 😌",
"realization 💡",
"relief 😅",
"remorse 😞",
"sadness 😞",
"surprise 😲",
"neutral 😐"
] | ---
license: apache-2.0
datasets:
- go_emotions
metrics:
- accuracy
model-index:
- name: xtremedistil-emotion
results:
- task:
name: Multi Label Text Classification
type: multi_label_classification
dataset:
name: go_emotions
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: NaN
---
# xtremedistil-l6-h384-go-emotion
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on the
[go_emotions dataset](https://huggingface.co/datasets/go_emotions).
See [this notebook](https://colab.research.google.com/github/jobergum/emotion/blob/main/TrainGoEmotions.ipynb) for how the model was trained and converted to ONNX format.
This model is deployed to [aiserv.cloud](https://aiserv.cloud/) for a live demo.
See [https://github.com/jobergum/browser-ml-inference](https://github.com/jobergum/browser-ml-inference) for how to reproduce.
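Since go_emotions is a multi-label task, the model's per-label logits are decoded with a sigmoid and a threshold rather than a softmax. A minimal sketch of that decoding step (the logits are made up and the label list is truncated for illustration):

```python
import math

def predict_emotions(logits, labels, threshold=0.5):
    """Multi-label decoding: sigmoid each logit, keep labels above the threshold."""
    probs = [1.0 / (1.0 + math.exp(-x)) for x in logits]
    return [label for label, p in zip(labels, probs) if p >= threshold]

labels = ["admiration", "amusement", "anger"]      # truncated label list
print(predict_emotions([2.0, -1.0, 0.1], labels))  # ['admiration', 'anger']
```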
### Training hyperparameters
- batch_size: 128
- learning_rate: 3e-05
- epochs: 4
<pre>
Num examples = 211225
Num Epochs = 4
Instantaneous batch size per device = 128
Total train batch size (w. parallel, distributed & accumulation) = 128
Gradient Accumulation steps = 1
Total optimization steps = 6604
[6604/6604 53:23, Epoch 4/4]
Step Training Loss
500 0.263200
1000 0.156900
1500 0.152500
2000 0.145400
2500 0.140500
3000 0.135900
3500 0.132800
4000 0.129400
4500 0.127200
5000 0.125700
5500 0.124400
6000 0.124100
6500 0.123400
</pre> |
768 | bertin-project/bertin-base-xnli-es | [
"entailment",
"neutral",
"contradiction"
] | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
- xnli
---
This checkpoint has been trained for the XNLI dataset.
This checkpoint was created from **Bertin Gaussian 512**, a **RoBERTa-base** model trained from scratch in Spanish. Information on this base model can be found in [its own card](https://huggingface.co/bertin-project/bertin-base-gaussian-exp-512seqlen) and, in greater detail, in [the main project card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish).
The training dataset for the base model is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled), subsampling documents to a total of about 50 million examples. Sampling is biased towards average perplexity values (using a Gaussian function), more often discarding documents with very large values (poor quality) or very small values (short, repetitive texts).
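The Gaussian-biased subsampling described above can be sketched as follows. This is an illustrative sketch only: the mean and standard deviation values are made up, and the exact weighting used by the project is described in the main project card.

```python
import math
import random

def gaussian_weight(perplexity, mean, std):
    """Sampling weight peaked at the average perplexity (assumed Gaussian form)."""
    return math.exp(-((perplexity - mean) ** 2) / (2 * std ** 2))

def keep_document(perplexity, mean=50.0, std=15.0, rng=random.random):
    # Documents near the mean perplexity are kept most often;
    # extreme values (very high or very low) are usually dropped.
    return rng() < gaussian_weight(perplexity, mean, std)
```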
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/), with TPU usage sponsored by Google.
## Team members
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Paulo Villegas ([paulo](https://huggingface.co/paulo)) |
769 | bespin-global/klue-roberta-small-3i4k-intent-classification | [
"command",
"fragment",
"intonation-depedent utterance",
"question",
"rhetorical command",
"rhetorical question",
"statement"
] | ---
language: ko
tags:
- intent-classification
datasets:
- kor_3i4k
license: cc-by-nc-4.0
---
## Finetuning
- Pretrain Model : [klue/roberta-small](https://github.com/KLUE-benchmark/KLUE)
- Dataset for fine-tuning : [3i4k](https://github.com/warnikchow/3i4k)
- Train : 46,863
- Validation : 8,271 (15% of Train)
- Test : 6,121
- Label info
- 0: "fragment",
- 1: "statement",
- 2: "question",
- 3: "command",
- 4: "rhetorical question",
- 5: "rhetorical command",
- 6: "intonation-dependent utterance"
- Parameters of Training
```
{
"epochs": 3 (setting 10 but early stopped),
"batch_size":32,
  "optimizer_class": "<class 'keras.optimizer_v2.adam.Adam'>",
"optimizer_params": {
"lr": 5e-05
},
"min_delta": 0.01
}
```
## Usage
``` python
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification, TextClassificationPipeline
# Load fine-tuned model by HuggingFace Model Hub
HUGGINGFACE_MODEL_PATH = "bespin-global/klue-roberta-small-3i4k-intent-classification"
loaded_tokenizer = RobertaTokenizerFast.from_pretrained(HUGGINGFACE_MODEL_PATH)
loaded_model = RobertaForSequenceClassification.from_pretrained(HUGGINGFACE_MODEL_PATH)
# using Pipeline
text_classifier = TextClassificationPipeline(
    tokenizer=loaded_tokenizer,
    model=loaded_model,
    return_all_scores=True
)
# predict
text = "your text"
preds_list = text_classifier(text)
# return_all_scores=True yields a list of score dicts per input; pick the best one
best_pred = max(preds_list[0], key=lambda pred: pred["score"])
print(f"Label of Best Intention: {best_pred['label']}")
print(f"Score of Best Intention: {best_pred['score']}")
```
## Evaluation
```
precision recall f1-score support
command 0.89 0.92 0.90 1296
fragment 0.98 0.96 0.97 600
intonation-depedent utterance 0.71 0.69 0.70 327
question 0.95 0.97 0.96 1786
rhetorical command 0.87 0.64 0.74 108
rhetorical question 0.61 0.63 0.62 174
statement 0.91 0.89 0.90 1830
accuracy 0.90 6121
macro avg 0.85 0.81 0.83 6121
weighted avg 0.90 0.90 0.90 6121
```
## Citing & Authors
<!--- Describe where people can find more information -->
[Jaehyeong](https://huggingface.co/jaehyeong) at [Bespin Global](https://www.bespinglobal.com/) |
770 | bewgle/bart-large-mnli-bewgle | [
"CONTRADICTION",
"NEUTRAL",
"ENTAILMENT"
] | ---
widget:
- text: "I like you. </s></s> I love you."
---
## bart-large-mnli
Trained by Facebook, [original source](https://github.com/pytorch/fairseq/tree/master/examples/bart)
|
771 | bgoel4132/tweet-disaster-classifier | [
"accident",
"cyclone",
"earthquake",
"explosion",
"fire",
"flood",
"hurricane",
"medical",
"other",
"pollution",
"tornado",
"typhoon",
"volcano"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bgoel4132/autonlp-data-tweet-disaster-classifier
co2_eq_emissions: 27.22397099134103
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 28716412
- CO2 Emissions (in grams): 27.22397099134103
## Validation Metrics
- Loss: 0.4146720767021179
- Accuracy: 0.8066924731182795
- Macro F1: 0.7835463282531184
- Micro F1: 0.8066924731182795
- Weighted F1: 0.7974252447208724
- Macro Precision: 0.8183917344767431
- Micro Precision: 0.8066924731182795
- Weighted Precision: 0.8005510296861892
- Macro Recall: 0.7679676081852519
- Micro Recall: 0.8066924731182795
- Weighted Recall: 0.8066924731182795
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bgoel4132/autonlp-tweet-disaster-classifier-28716412
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bgoel4132/autonlp-tweet-disaster-classifier-28716412", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bgoel4132/autonlp-tweet-disaster-classifier-28716412", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
772 | bgoel4132/twitter-sentiment | [
"cyclone",
"earthquake",
"explosion",
"fire",
"flood",
"hurricane",
"medical",
"pollution",
"tornado",
"typhoon",
"volcano"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bgoel4132/autonlp-data-twitter-sentiment
co2_eq_emissions: 186.8637425115097
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 35868888
- CO2 Emissions (in grams): 186.8637425115097
## Validation Metrics
- Loss: 0.2020547091960907
- Accuracy: 0.9233253193796257
- Macro F1: 0.9240407542958707
- Micro F1: 0.9233253193796257
- Weighted F1: 0.921800586774046
- Macro Precision: 0.9432284179846658
- Micro Precision: 0.9233253193796257
- Weighted Precision: 0.9247263361914827
- Macro Recall: 0.9139437626409382
- Micro Recall: 0.9233253193796257
- Weighted Recall: 0.9233253193796257
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bgoel4132/autonlp-twitter-sentiment-35868888
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bgoel4132/autonlp-twitter-sentiment-35868888", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bgoel4132/autonlp-twitter-sentiment-35868888", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
773 | bhadresh-savani/albert-base-v2-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy, F1 Score
---
# Albert-base-v2-emotion
## Model description:
[ALBERT](https://arxiv.org/pdf/1909.11942v6.pdf) ("A Lite BERT") is an architecture with significantly fewer parameters than a traditional BERT model.
[Albert-base-v2](https://huggingface.co/albert-base-v2) was fine-tuned on the emotion dataset using the HuggingFace Trainer with the hyperparameters below:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
## Model Performance Comparison on Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/albert-base-v2-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.010403595864772797},
{'label': 'joy', 'score': 0.8902180790901184},
{'label': 'love', 'score': 0.042532723397016525},
{'label': 'anger', 'score': 0.041297927498817444},
{'label': 'fear', 'score': 0.011772023513913155},
{'label': 'surprise', 'score': 0.0037756056990474463}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
## Eval results
```json
{
'test_accuracy': 0.936,
'test_f1': 0.9365658988006296,
'test_loss': 0.15278364717960358,
'test_runtime': 10.9413,
'test_samples_per_second': 182.794,
'test_steps_per_second': 2.925
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
774 | bhadresh-savani/bert-base-go-emotion | [
"admiration",
"amusement",
"anger",
"annoyance",
"approval",
"caring",
"confusion",
"curiosity",
"desire",
"disappointment",
"disapproval",
"disgust",
"embarrassment",
"excitement",
"fear",
"gratitude",
"grief",
"joy",
"love",
"nervousness",
"neutral",
"optimism",
"pride",
"realization",
"relief",
"remorse",
"sadness",
"surprise"
] | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- go-emotion
- pytorch
license: apache-2.0
datasets:
- go_emotions
metrics:
- Accuracy
---
# Bert-Base-Uncased-Go-Emotion
## Model description:
## Training Parameters:
```
Num examples = 169208
Num Epochs = 3
Instantaneous batch size per device = 16
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 1
Total optimization steps = 31728
```
## TrainOutput:
```
'train_loss': 0.12085497042373672,
```
## Evaluation Output:
```
'eval_accuracy_thresh': 0.9614765048027039,
'eval_loss': 0.1164659634232521
```
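The `eval_accuracy_thresh` figure above is likely the element-wise accuracy of the sigmoid outputs against the multi-hot labels at a 0.5 threshold, a common metric for multi-label heads. A sketch of how such a number is computed, with made-up logits and targets:

```python
import math

def accuracy_thresh(logits, targets, thresh=0.5):
    """Fraction of (example, label) cells where sigmoid(logit) > thresh matches the target."""
    correct = total = 0
    for row_logits, row_targets in zip(logits, targets):
        for x, y in zip(row_logits, row_targets):
            pred = (1.0 / (1.0 + math.exp(-x))) > thresh
            correct += int(pred == bool(y))
            total += 1
    return correct / total

print(accuracy_thresh([[2.0, -3.0], [-1.0, 0.4]], [[1, 0], [0, 1]]))  # 1.0
```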
## Colab Notebook:
[Notebook](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/go_emotion_of_transformers_multilabel_text_classification_v2.ipynb) |
775 | bhadresh-savani/bert-base-uncased-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
language:
- en
license: apache-2.0
tags:
- text-classification
- emotion
- pytorch
datasets:
- emotion
metrics:
- Accuracy, F1 Score
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
model-index:
- name: bhadresh-savani/bert-base-uncased-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.9265
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWQzNzA2MTFkY2RkNDMxYTFhOGUzMTdiZTgwODA3ODdmZTVhNTVjOTAwMGM5NjU1OGY0MjMzZWU0OTU2MzY1YiIsInZlcnNpb24iOjF9.f6iWK0iyU8_g32W2oMfh1ChevMsl0StI402cB6DNzJCYj9xywTnFltBY36jAJFDRK41HXdMnPMl64Bynr-Q9CA
- type: precision
value: 0.8859601677706858
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTc2ZjRmMzYzNTE0ZDQ1ZDdkYWViYWNhZDhkOTE2ZDhmMDFjZmZiZjRkZWVlMzQ3MWE4NDNlYzlmM2I4ZGM2OCIsInZlcnNpb24iOjF9.jR-gFrrBIAfiYV352RDhK3nzgqIgNCPd55OhIcCfVdVAWHQSZSJXhFyg8yChC7DwoVmUQy1Ya-d8Hflp7Wi-AQ
- type: precision
value: 0.9265
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDAyMWZjZTM5NWNjNTcyMWQzMWQyNDcyN2RlZTQyZTM4ZDQ4Y2FlNzM2OTZkMzM3YzI4YTAwNzg4MGNjZmZjZCIsInZlcnNpb24iOjF9.cmkuDmhhETKIKAL81K28oiO889sZ0hvEpZ6Ep7dW_KB9VOTFs15BzFY9vwcpdXQDugWBbB2g7r3FUgRLwIEpAg
- type: precision
value: 0.9265082039990273
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTA2NzY2NTJmZTExZWM3OGIzYzg3ZDM3Y2I5MTU3Mjg3Y2NmZGEyMjFmNjExZWM3ZDFjNzdhOTZkNTYwYWQxYyIsInZlcnNpb24iOjF9.DJgeA6ZovHoxgCqhzilIzafet8uN3-Xbx1ZYcEEc4jXzFbRtErE__QHGaaSaUQEzPp4BAztp1ageOaBoEmXSDg
- type: recall
value: 0.879224648382427
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGU3MmQ1Yjg5OGJlYTE1NWJmNGVjY2ExMDZiZjVjYmVkOGYxYWFkOTVlMDVjOWVhZGFjOGFkYzcwMGIyMTAyZCIsInZlcnNpb24iOjF9.jwgaNEBSQENlx3vojBi1WKJOQ7pSuP4Iyw4kKPsq9IUaW-Ah8KdgPV9Nm2DY1cwEtMayvVeIVmQ3Wo8PORDRAg
- type: recall
value: 0.9265
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDE3OWQ0ZGZjNzAxY2I0NGMxNDU0OWE1OGM2N2Q3OTUwYWI0NmZjMDQ3MDc0NDA4YTc2NDViM2Y0ZTMyMjYyZCIsInZlcnNpb24iOjF9.Ihc61PSO3K63t5hUSAve4Gt1tC8R_ZruZo492dTD9CsKOF10LkvrCskJJaOATjFJgqb3FFiJ8-nDL9Pa3HF-Dg
- type: recall
value: 0.9265
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzJkYTg5YjA0YTBlNDY3ZjFjZWIzOWVhYjI4Y2YxM2FhMmUwMDZlZTE0NTIzNjMxMjE3NzgwNGFjYTkzOWM1YyIsInZlcnNpb24iOjF9.LlBX4xTjKuTX0NPK0jYzYDXRVnUEoUKVwIHfw5xUzaFgtF4wuqaYV7F0VKoOd3JZxzxNgf7JzeLof0qTquE9Cw
- type: f1
value: 0.8821398657055098
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTE4OThiMmE0NDEzZjBkY2RmZWNjMGI3YWNmNTFjNTY5NjIwNjFkZjk1ZjIxMjI4M2ZiZGJhYzJmNzVhZTU1NSIsInZlcnNpb24iOjF9.gzYyUbO4ycvP1RXnrKKZH3E8ym0DjwwUFf4Vk9j0wrg2sWIchjmuloZz0SLryGqwHiAV8iKcSBWWy61Q480XAw
- type: f1
value: 0.9265
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGM2Y2E0NjMyNmJhMTE4NjYyMjI2MTJlZjUzNmRmY2U3Yjk3ZGUyYzU2OWYzMWM2ZjY4ZTg0OTliOTY3YmI2MSIsInZlcnNpb24iOjF9.hEz_yExs6LV0RBpFBoUbnAQZHitxN57HodCJpDx0yyW6dQwWaza0JxdO-kBf8JVBK8JyISkNgOYskBY5LD4ZDQ
- type: f1
value: 0.9262425173620311
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmMyY2NhNTRhOGMwM2M5OTQxNDQ0NjRkZDdiMDExMWFkMmI4MmYwZGQ1OGRiYmRjMmE2YTc0MGZmMWMwN2Q4MSIsInZlcnNpb24iOjF9.ljbb2L4R08NCGjcfuX1878HRilJ_p9qcDJpWhsu-5EqWCco80e9krb7VvIJV0zBfmi7Z3C2qGGRsfsAIhtQ5Dw
- type: loss
value: 0.17315374314785004
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmQwN2I2Nzg4OWU1ODE5NTBhMTZiMjljMjJhN2JiYmY0MTkzMTA1NmVhMGU0Y2Y0NjgyOTU3ZjgyYTc3ODE5NCIsInZlcnNpb24iOjF9.EEp3Gxm58ab-9335UGQEk-3dFQcMRgJgViI7fpz7mfY2r5Pg-AOel5w4SMzmBM-hiUFwStgxe5he_kG2yPGFCw
---
# bert-base-uncased-emotion
## Model description:
[BERT](https://arxiv.org/abs/1810.04805) is a bidirectional Transformer encoder architecture pretrained with a masked language modeling (MLM) objective.
[bert-base-uncased](https://huggingface.co/bert-base-uncased) was fine-tuned on the emotion dataset using the HuggingFace Trainer with the training parameters below:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
## Model Performance Comparison on Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/bert-base-uncased-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
output:
[[
{'label': 'sadness', 'score': 0.0005138228880241513},
{'label': 'joy', 'score': 0.9972520470619202},
{'label': 'love', 'score': 0.0007443308713845909},
{'label': 'anger', 'score': 0.0007404946954920888},
{'label': 'fear', 'score': 0.00032938539516180754},
{'label': 'surprise', 'score': 0.0004197491507511586}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
Follow the notebook above, changing the model name from distilbert to bert.
## Eval results
```json
{
'test_accuracy': 0.9405,
'test_f1': 0.9405920712282673,
'test_loss': 0.15769127011299133,
'test_runtime': 10.5179,
'test_samples_per_second': 190.152,
'test_steps_per_second': 3.042
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
776 | bhadresh-savani/distilbert-base-uncased-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
language:
- en
license: apache-2.0
tags:
- text-classification
- emotion
- pytorch
datasets:
- emotion
metrics:
- Accuracy, F1 Score
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
model-index:
- name: bhadresh-savani/distilbert-base-uncased-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.927
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzQxOGRmMjFlZThmZWViNjNmNGMzMTdjMGNjYjg1YWUzOTI0ZDlmYjRhYWMzMDA3Yjg2N2FiMTdmMzk0ZjJkOSIsInZlcnNpb24iOjF9.mOqr-hgNrnle7WCPy3Mo7M3fITFppn5gjpNagGMf_TZfB6VZnPKfZ51UkNFQlBtUlcm0U8vwPkF79snxwvCoDw
- type: precision
value: 0.8880230732280744
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjZiN2NjNTkyN2M3ZWM2ZDZiNDk1OWZhN2FmNTAwZDIzMmQ3NTU2Yjk2MTgyNjJmMTNjYTYzOTc1NDdhYTljYSIsInZlcnNpb24iOjF9.0rWHmCZ2PyZ5zYkSeb_tFdQG9CHS5PdpOZ9kOfrIzEXyZ968daayaOJi2d6iO84fnauE5hZiIAUPsx24Vr4nBA
- type: precision
value: 0.927
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmRhNWM1NDQ4ZjkyYjAxYjQ5MzQzMDA1ZDIzYWU3YTE4NTI2ZTMwYWI2ZWQ4NzQ3YzJkODYzMmZhZDI1NGRlNCIsInZlcnNpb24iOjF9.NlII1s42Mr_DMzPEoR0ntyh5cDW0405TxVkWhCgXLJTFAdnivH54-zZY4av1U5jHPTeXeWwZrrrbMwHCRBkoCw
- type: precision
value: 0.9272902840835793
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODhkNmM5NmYyMzA4MjkwOTllZDgyMDQ1NzZkN2QzOTAyOTMyNGFlZTU4NzM5NmM5NWQ1YmUxYmRmNjA5YjhhNCIsInZlcnNpb24iOjF9.oIn1KT-BOpFNLXiKL29frMvgHhWZMHWc9Q5WgeR7UaMEO7smkK8J3j5HAMy17Ktjv2dh783-f76N6gyJ_NewCg
- type: recall
value: 0.8790126653780703
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjhlNzczNDY2NDVlM2UwMjAzOWQxYTAyNWZkNGZlYmNjODNiZTEzMTcxNTE3MTAxNjNkOTFiMmRiMzViMzJmZiIsInZlcnNpb24iOjF9.AXp7omMuUZFJ6mzAVTQPMke7QoUtoi4RJSSE7Xbnp2pNi7y-JtznKdm---l6RfqcHPlI0jWr7TVGoFsWZ64YAg
- type: recall
value: 0.927
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjEyYmZiZDQ4MzM1ZmQ2ZmJhZWU4OTVkNmViYjA5NzhiN2MxODE0MzUxZTliZTk0MzViZDAyNGU4MDFjYjM1MSIsInZlcnNpb24iOjF9.9lazxLXbPOdwhqoYtIudwRwjfNVZnUu7KvGRklRP_RAoQStAzgmWMIrT3ckX_d5_6bKZH9fIdujUn5Qz-baKBw
- type: recall
value: 0.927
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWVhMzY0YTA4YmQzYTg4YTBiMzQ5YzRiZWJhMjM1NjUzZGQxZmQ5M2NkZDcyNTQ0ZmJjN2NkY2ZiYjg0OWI0ZCIsInZlcnNpb24iOjF9.QgTv726WCTyvrEct0NM8Zpc3vUnDbIwCor9EH941-zpJtuWr-xpdZzYZFJfILkVA0UUn1y6Jz_ABfkfBeyZTBg
- type: f1
value: 0.8825061528287809
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzQzZTJkMDAwOTUwMzY3ZjI2MjIxYjlmZTg3YTdhNTc4ZjYyMmQ2NDQzM2FmYzk3OGEzNjhhMTk3NTQ3OTlhNyIsInZlcnNpb24iOjF9.hSln1KfKm0plK7Qao9vlubFtAl1M7_UYHNM6La9gEZlW_apnU1Mybz03GT2XZORgOVPe9JmgygvZByxQhpsYBw
- type: f1
value: 0.927
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzljODQ3NjE3MDRkODE3ZjFlZmY5MjYyOGJlNDQ4YzdlZGRiMTI5OGZiZWM2ODkyZjMyZWQ3MTkzYWU5YThkOCIsInZlcnNpb24iOjF9.7qfBw39fv22jSIJoY71DkOVr9eBB-srhqSi09bCcUC7Huok4O2Z_vB7gO_Rahh9sFgKVu1ZATusjTmOLQr0fBw
- type: f1
value: 0.926876082854655
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjJhN2UzODgxOWQ0Y2E3YTcwZTQxMDE0ZWRmYThjOWVhYWQ1YjBhMzk0YWUxNzE2ZjFhNWM5ZmE2ZmI1YTczYSIsInZlcnNpb24iOjF9.nZW0dBdLmh_FgNw6GaITvSJFX-2C_Iku3NanU8Rip7FSiRHozKPAjothdQh9MWQnq158ZZGPPVIjtyIvuTSqCw
- type: loss
value: 0.17403268814086914
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTVjZmFiOGQwZGY1OTU5YWFkNGZjMTlhOGI4NjE3MGI4ZDhkODcxYmJiYTQ3NWNmMWM0ODUyZDI1MThkYTY3ZSIsInZlcnNpb24iOjF9.OYz5BI3Lz8LgjAqVnD6NcrG3UAG0D3wjKJ7G5298RRGaNpb621ycisG_7UYiWixY7e2RJafkfRiplmkdczIFDQ
---
# Distilbert-base-uncased-emotion
## Model description:
[DistilBERT](https://arxiv.org/abs/1910.01108) is created with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language understanding. It is smaller and faster than BERT and other BERT-based models.
[Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) was fine-tuned on the emotion dataset using the HuggingFace Trainer with the hyperparameters below:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
## Model Performance Comparison on Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.0006792712374590337},
{'label': 'joy', 'score': 0.9959300756454468},
{'label': 'love', 'score': 0.0009452480007894337},
{'label': 'anger', 'score': 0.0018055217806249857},
{'label': 'fear', 'score': 0.00041110432357527316},
{'label': 'surprise', 'score': 0.0002288572577526793}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
## Eval results
```json
{
'test_accuracy': 0.938,
'test_f1': 0.937932884041714,
'test_loss': 0.1472451239824295,
'test_mem_cpu_alloc_delta': 0,
'test_mem_cpu_peaked_delta': 0,
'test_mem_gpu_alloc_delta': 0,
'test_mem_gpu_peaked_delta': 163454464,
'test_runtime': 5.0164,
'test_samples_per_second': 398.69
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
777 | bhadresh-savani/distilbert-base-uncased-go-emotion | [
"admiration",
"amusement",
"anger",
"annoyance",
"approval",
"caring",
"confusion",
"curiosity",
"desire",
"disappointment",
"disapproval",
"disgust",
"embarrassment",
"excitement",
"fear",
"gratitude",
"grief",
"joy",
"love",
"nervousness",
"neutral",
"optimism",
"pride",
"realization",
"relief",
"remorse",
"sadness",
"surprise"
] | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- go-emotion
- pytorch
license: apache-2.0
datasets:
- go_emotions
metrics:
- Accuracy
---
# Distilbert-Base-Uncased-Go-Emotion
## Model description:
**Note: this model is not performing well.**
## Training Parameters:
```
Num Epochs = 3
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 1
Total optimization steps = 15831
```
## TrainOutput:
```
'train_loss': 0.105500
```
## Evaluation Output:
```
'eval_accuracy_thresh': 0.962023913860321,
'eval_loss': 0.11090277135372162,
```
## Colab Notebook:
[Notebook](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/go_emotion_of_transformers_multilabel_text_classification_v2.ipynb) |
778 | bhadresh-savani/distilbert-base-uncased-sentiment-sst2 | [
"NEGATIVE",
"POSITIVE"
] | ---
language: en
license: apache-2.0
datasets:
- sst2
---
# distilbert-base-uncased-sentiment-sst2
This model identifies whether a sentence expresses positive or negative sentiment.
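A minimal usage sketch (not part of the original card): the commented pipeline call downloads the model, and `top_label` is a hypothetical helper for picking the highest-scoring class from the pipeline's output.

```python
def top_label(scores):
    """Pick the highest-scoring label from a pipeline's list of score dicts."""
    return max(scores, key=lambda s: s["score"])["label"]

# Illustrative usage with transformers (requires a model download):
# from transformers import pipeline
# clf = pipeline("text-classification",
#                model="bhadresh-savani/distilbert-base-uncased-sentiment-sst2",
#                return_all_scores=True)
# print(top_label(clf("What a wonderful film!")[0]))
```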
## Dataset:
The Stanford Sentiment Treebank from GLUE
## Results:
```
***** eval metrics *****
epoch = 3.0
eval_accuracy = 0.9094
eval_loss = 0.3514
eval_runtime = 0:00:03.60
eval_samples = 872
eval_samples_per_second = 242.129
eval_steps_per_second = 30.266
``` |
779 | bhadresh-savani/roberta-base-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
language:
- en
license: apache-2.0
tags:
- text-classification
- emotion
- pytorch
datasets:
- emotion
metrics:
- Accuracy, F1 Score
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
model-index:
- name: bhadresh-savani/roberta-base-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.931
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjg5OTI4ZTlkY2VmZjYzNGEzZGQ3ZjczYzY5YjJmMGVmZDQ4ZWNiYTAyZTJiZjlmMTU2MjE1NTllMWFhYzU0MiIsInZlcnNpb24iOjF9.dc44cEsbu900M2s64GyVIWKPagBzwI-dPlfvh0NGyJFMGKOcypke9P2ary9fBZITrH3UF6lza3sCh7vWYZFHBQ
- type: precision
value: 0.9168321948556312
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2EzYTcxNTExNGU1MmFiZjE3NGE5MDIyMDU2M2U3OGExOTdjZDE5YWU2NDhmOTJlYWMzY2NkN2U5MmRmZTE0MiIsInZlcnNpb24iOjF9.4U7vJ3ALdUUxySMhVeb4Qa1tSp3wphSIZkRYNMujz-KrOZW8kkcmCde3ioStBg3Qqyf1powYd88uk1R7DuWRBA
- type: precision
value: 0.931
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjhmZGRlYWE5ZTAzMmJiMzlmMWZiM2VlYjdiNzI0NjVmN2M2YzcxM2EzYTg0OTFiZTE1MjVmNzE5NGEzYTg2ZCIsInZlcnNpb24iOjF9.8eCHAK0rlZWnhBNQdh9kcuAeItmDUAgK3KkZ7eC-GyYhi4HT5dZiS6btcC5EjkYVOS4czcjzqxfVz4PuZgtLDQ
- type: precision
value: 0.9357445689014415
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhZTdkNzYzMjhjZjc4MTAxNWZiYjgzMjhhNjRiZWRmYjc5YTA0NTQ1MzllMTYxMTVkMDk4OTE0ZGEyMTNhMiIsInZlcnNpb24iOjF9.YIZfj2Eo1nMX2GVSfqJy-Cp7VBubfUh2LuOnU60sG5Lci8FdlNbAanS1IzAyxU3U29lqiTasxfS_yrwAj5cmBQ
- type: recall
value: 0.8743657671177089
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2Y2YTcyNzMwYzZiMmM1Yzc4YWZhNDM3ZDQyMjI1NWZhMjQyNmU5NTA0YmE2ZDBiZmY1MmUyZWRlMjRhMjFmYSIsInZlcnNpb24iOjF9.XKlFy_Cx4T4l7Otd8aAwWcI-fJ_dJ6V1Kp3uZm6OWjwCb1Do6mSdPFfwiMeBZZyfEIsNBnguegssZvHsOfTSAQ
- type: recall
value: 0.931
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgzN2JkNzAzZDRjNjJmZjNkY2RmYzVkMWEzYTMzZDU4NzJlYzBmOWE4MTU0MGU0MTJhM2JjZDdjODhlZDExOCIsInZlcnNpb24iOjF9.9tSVB4yNBdFXpH3equwo1ZaEnVUktO6lm93UEJ-luKhxo6wgS54OLjgDq7IpJYwa3lvYyjy-sxzQEe9ri31WAg
- type: recall
value: 0.931
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGVhZTIyMmVmOTU1YWNjMmZiZjNmOTNlNzlhZTk3NjhlZmMwZGFkZWQxZTlhZWUwZGQyN2JhOWQyNWQ3MTVhOCIsInZlcnNpb24iOjF9.2odv2fK7zH0_S_7wC3obONzjxOipDdjWvddhnGdMnrIN6CiZwLp7XgizpqcWbwAQ_9YJwjC-6wXpbq2jTvN0Bw
- type: f1
value: 0.8821236522209227
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDI0YTUxOTA2M2ZjNGM1OTJlZDAzZTAxNTg4YjY3OWNmMjNmMTk0YWRjZTE2Y2ZmYWI1ZmU3ZmJmNzNjMjBlOCIsInZlcnNpb24iOjF9.P5-TbuEUrCtX9H7F-tKn8LI1RBPhoJwjJm_l853WTSzdLioThAtIK5HBG0xgXT2uB0Q8v94qH2b8cz1j_WonDg
- type: f1
value: 0.931
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjNmNDgyMmFjODYwNjcwOTJiOGM2N2YwYjUyMDk5Yjk2Y2I3NmFmZGFhYjU0NGM2OGUwZmRjNjcxYTU3YzgzNSIsInZlcnNpb24iOjF9.2ZoRJwQWVIcl_Ykxce1MnZ3mSxBGxGeNYFPxt9mivo9yTi3gUE7ua6JRpVEOnOUbevlWxVkUUNnmOPFqBN1sCQ
- type: f1
value: 0.9300782840205046
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGE1OTcxNmNmMjQ3ZDAzYzk0N2Q1MGFjM2VhNWMyYmRjY2E3ZThjODExOTNlNWMxYzdlMWM2MDBiMTZhY2M2OSIsInZlcnNpb24iOjF9.r63SEArCiFB5m0ccV2q_t5uSOtjVnWdz4PfvCYUchm0JlrRC9YAm5oWKeO419wdyFY4rZFe014yv7sRcV-CgBQ
- type: loss
value: 0.15155883133411407
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2M4MmVlNjAzZjhiMWJlNWQxMDg5ZTRiYjFlZGYyMGMyYzU4M2IwY2E1M2E2MzA5NmU5ZjgwZTZmMDI5YjgzMyIsInZlcnNpb24iOjF9.kjgFJohkTxLKtzHJDlBvd6qolGQDSZLbrDE7C07xNGmarhTLc_A3MmLeC4MmQGOl1DxfnHflImIkdqPylyylDA
---
# roberta-base-emotion
## Model description:
[RoBERTa](https://arxiv.org/abs/1907.11692) is BERT trained with better hyperparameter choices; the name stands for a Robustly Optimized BERT pretraining approach.
[roberta-base](https://huggingface.co/roberta-base) was fine-tuned on the emotion dataset using the HuggingFace Trainer with the hyperparameters below:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
## Model Performance Comparison on Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/roberta-base-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.002281982684507966},
{'label': 'joy', 'score': 0.9726489186286926},
{'label': 'love', 'score': 0.021365027874708176},
{'label': 'anger', 'score': 0.0026395076420158148},
{'label': 'fear', 'score': 0.0007162453257478774},
{'label': 'surprise', 'score': 0.0003483477921690792}
]]
"""
```
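With `return_all_scores=True`, the pipeline returns one list of `{'label', 'score'}` dicts per input, as in the output above. A minimal plain-Python helper (not part of the original card) for picking the top label:

```python
# Scores as returned by the pipeline for one input (values rounded from the
# example output above).
scores = [
    {'label': 'sadness', 'score': 0.0023},
    {'label': 'joy', 'score': 0.9726},
    {'label': 'love', 'score': 0.0214},
    {'label': 'anger', 'score': 0.0026},
    {'label': 'fear', 'score': 0.0007},
    {'label': 'surprise', 'score': 0.0003},
]

def top_label(scores):
    """Return the (label, score) pair with the highest score."""
    best = max(scores, key=lambda s: s['score'])
    return best['label'], best['score']

print(top_label(scores))  # ('joy', 0.9726)
```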
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
Follow the above notebook, changing the model name to `roberta`.
## Eval results
```json
{
'test_accuracy': 0.9395,
'test_f1': 0.9397328860104454,
'test_loss': 0.14367154240608215,
'test_runtime': 10.2229,
'test_samples_per_second': 195.639,
'test_steps_per_second': 3.13
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
780 | bioformers/bioformer-8L-mnli | [
"contradiction",
"entailment",
"neutral"
] | [bioformer-cased-v1.0](https://huggingface.co/bioformers/bioformer-cased-v1.0) fine-tuned on the [MNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset for 2 epochs.
The fine-tuning process was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=16
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
eval_accuracy = 0.803973
## Speed
In our experiments, Bioformer's inference is 3x as fast as BERT-base/BioBERT/PubMedBERT, and 40% faster than DistilBERT.
## More information
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data. (source: https://huggingface.co/datasets/glue) |
781 | bioformers/bioformer-8L-qnli | [
"entailment",
"not_entailment"
] | ---
license: apache-2.0
language:
- en
---
[bioformer-8L](https://huggingface.co/bioformers/bioformer-8L) fine-tuned on the [QNLI](https://huggingface.co/datasets/glue) dataset for 2 epochs.
The fine-tuning process was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=16
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
eval_accuracy = 0.883397
## More information
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark.
(source: https://paperswithcode.com/dataset/qnli)
Original GLUE paper: https://arxiv.org/abs/1804.07461 |
782 | bipin/malayalam-news-classifier | [
"business",
"entertainment",
"sports"
] | ---
license: mit
tags:
- text-classification
- roberta
- malayalam
- pytorch
widget:
- text: "2032 ഒളിമ്പിക്സിന് ബ്രിസ്ബെയ്ന് വേദിയാകും; ഗെയിംസിന് വേദിയാകുന്ന മൂന്നാമത്തെ ഓസ്ട്രേലിയന് നഗരം"
---
## Malayalam news classifier
### Overview
This model is trained on top of [MalayalamBert](https://huggingface.co/eliasedwin7/MalayalamBERT) for the task of classifying Malayalam news headlines. Presently, the following news categories are supported:
* Business
* Sports
* Entertainment
### Dataset
The dataset used for training this model can be found [here](https://www.kaggle.com/disisbig/malyalam-news-dataset).
### Using the model with HF pipeline
```python
from transformers import pipeline
news_headline = "ക്രിപ്റ്റോ ഇടപാടുകളുടെ വിവരങ്ങൾ ആവശ്യപ്പെട്ട് ആദായനികുതി വകുപ്പ് നോട്ടീസയച്ചു"
model = pipeline(task="text-classification", model="bipin/malayalam-news-classifier")
model(news_headline)
# Output
# [{'label': 'business', 'score': 0.9979357123374939}]
```
### Contact
For feedback and questions, feel free to contact via twitter [@bkrish_](https://twitter.com/bkrish_) |
783 | bitmorse/autonlp-ks-530615016 | [
"canceled",
"failed",
"live",
"successful"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bitmorse/autonlp-data-ks
co2_eq_emissions: 2.2247356264808964
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 530615016
- CO2 Emissions (in grams): 2.2247356264808964
## Validation Metrics
- Loss: 0.7859578132629395
- Accuracy: 0.676854818831649
- Macro F1: 0.3297126297995653
- Micro F1: 0.676854818831649
- Weighted F1: 0.6429522696884535
- Macro Precision: 0.33152557743856437
- Micro Precision: 0.676854818831649
- Weighted Precision: 0.6276125515413322
- Macro Recall: 0.33784302289888885
- Micro Recall: 0.676854818831649
- Weighted Recall: 0.676854818831649
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bitmorse/autonlp-ks-530615016
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
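The raw `outputs.logits` are unnormalized scores; applying a softmax converts them into class probabilities. A dependency-free sketch (the logit values are made up, one per class):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the four classes (canceled, failed, live, successful).
probs = softmax([0.3, -1.2, 0.5, 2.1])
print(probs)
```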
784 | biu-nlp/superpal | [
"aligned",
"not_aligned"
] | ---
widget:
- text: "Prime Minister Hun Sen insisted that talks take place in Cambodia. </s><s> Cambodian leader Hun Sen rejected opposition parties' demands for talks outside the country."
---
# SuperPAL model
Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline
Ori Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan, 2021. [PDF](https://arxiv.org/pdf/2009.00590)
**How to use?**
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("biu-nlp/superpal")
model = AutoModelForSequenceClassification.from_pretrained("biu-nlp/superpal")
```
The original repo is [here](https://github.com/oriern/SuperPAL).
If you find our work useful, please cite the paper as:
```python
@inproceedings{ernst-etal-2021-summary,
title = "Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline",
author = "Ernst, Ori and Shapira, Ori and Pasunuru, Ramakanth and Lepioshkin, Michael and Goldberger, Jacob and Bansal, Mohit and Dagan, Ido",
booktitle = "Proceedings of the 25th Conference on Computational Natural Language Learning",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.conll-1.25",
pages = "310--322"
}
``` |
785 | blackbird/alberta-base-mnli-v1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | |
786 | blackbird/bert-base-uncased-MNLI-v1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | BERT-based model fine-tuned on MNLI with our custom training routine.
It yields 60% accuracy on the adversarial HANS dataset. |
787 | blanchefort/rubert-base-cased-sentiment-med | [
"NEUTRAL",
"POSITIVE",
"NEGATIVE"
] | ---
language:
- ru
tags:
- sentiment
- text-classification
---
# RuBERT for Sentiment Analysis of Medical Reviews
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on a corpus of medical reviews.
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-med')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-med', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
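`predict` returns class indices; these can be mapped back to names using the order given in the "Labels" section above (the mapping below is assumed from that section):

```python
# Index-to-label mapping taken from the "Labels" section above.
ID2LABEL = {0: 'NEUTRAL', 1: 'POSITIVE', 2: 'NEGATIVE'}

def to_labels(indices):
    """Convert predicted class indices to their label names."""
    return [ID2LABEL[int(i)] for i in indices]

print(to_labels([1, 0, 2]))  # ['POSITIVE', 'NEUTRAL', 'NEGATIVE']
```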
## Dataset used for model training
**[Reviews of medical institutions (Отзывы о медучреждениях)](https://github.com/blanchefort/datasets/tree/master/medical_comments)**
> The dataset contains user reviews of medical institutions, collected in May 2019 from the website prodoctorov.ru
|
789 | blanchefort/rubert-base-cased-sentiment-rurewiews | [
"NEUTRAL",
"POSITIVE",
"NEGATIVE"
] | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuReviews
---
# RuBERT for Sentiment Analysis of Product Reviews
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuReviews](https://github.com/sismetanin/rureviews).
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
## Dataset used for model training
**[RuReviews](https://github.com/sismetanin/rureviews)**
> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.
|
790 | blanchefort/rubert-base-cased-sentiment-rusentiment | [
"NEUTRAL",
"POSITIVE",
"NEGATIVE"
] | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuSentiment
---
# RuBERT for Sentiment Analysis
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/).
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-rusentiment')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-rusentiment', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
## Dataset used for model training
**[RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/)**
> A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018. |
791 | blanchefort/rubert-base-cased-sentiment | [
"NEGATIVE",
"NEUTRAL",
"POSITIVE"
] | ---
language:
- ru
tags:
- sentiment
- text-classification
---
# RuBERT for Sentiment Analysis
Short Russian texts sentiment classification
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on an aggregated corpus of 351,797 texts.
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
## Datasets used for model training
**[RuTweetCorp](https://study.mokoron.com/)**
> Rubtsova Y. Automatic construction and analysis of a corpus of short texts (microblog posts) for developing and training a sentiment classifier // Knowledge Engineering and Semantic Web Technologies. – 2012. – Vol. 1. – pp. 109–116.
**[RuReviews](https://github.com/sismetanin/rureviews)**
> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.
**[RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/)**
> A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018.
**[Reviews of medical institutions (Отзывы о медучреждениях)](https://github.com/blanchefort/datasets/tree/master/medical_comments)**
> The dataset contains user reviews of medical institutions, collected in May 2019 from the website prodoctorov.ru |
792 | blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6660
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8471 | 0.58 |
| No log | 2.0 | 114 | 0.8450 | 0.58 |
| No log | 3.0 | 171 | 0.7846 | 0.58 |
| No log | 4.0 | 228 | 0.8649 | 0.58 |
| No log | 5.0 | 285 | 0.7220 | 0.68 |
| No log | 6.0 | 342 | 0.7395 | 0.66 |
| No log | 7.0 | 399 | 0.7198 | 0.72 |
| No log | 8.0 | 456 | 0.6417 | 0.72 |
| 0.7082 | 9.0 | 513 | 0.6265 | 0.74 |
| 0.7082 | 10.0 | 570 | 0.6660 | 0.7 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
793 | blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.54
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0005
- Accuracy: 0.54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 1.3510 | 0.54 |
| No log | 2.0 | 114 | 0.9606 | 0.54 |
| No log | 3.0 | 171 | 0.9693 | 0.54 |
| No log | 4.0 | 228 | 1.0445 | 0.54 |
| No log | 5.0 | 285 | 1.0005 | 0.54 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
794 | blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.72
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6748
- Accuracy: 0.72
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8396 | 0.58 |
| No log | 2.0 | 114 | 0.8608 | 0.58 |
| No log | 3.0 | 171 | 0.7642 | 0.68 |
| No log | 4.0 | 228 | 0.8196 | 0.64 |
| No log | 5.0 | 285 | 0.6477 | 0.72 |
| No log | 6.0 | 342 | 0.6861 | 0.72 |
| No log | 7.0 | 399 | 0.6735 | 0.74 |
| No log | 8.0 | 456 | 0.6516 | 0.72 |
| 0.6526 | 9.0 | 513 | 0.6707 | 0.72 |
| 0.6526 | 10.0 | 570 | 0.6748 | 0.72 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
795 | blizrys/biobert-base-cased-v1.1-finetuned-pubmedqa | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: biobert-base-cased-v1.1-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.1-finetuned-pubmedqa
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3182
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8591 | 0.58 |
| No log | 2.0 | 114 | 0.9120 | 0.58 |
| No log | 3.0 | 171 | 0.8159 | 0.62 |
| No log | 4.0 | 228 | 1.1651 | 0.54 |
| No log | 5.0 | 285 | 1.2350 | 0.6 |
| No log | 6.0 | 342 | 1.5563 | 0.68 |
| No log | 7.0 | 399 | 2.0233 | 0.58 |
| No log | 8.0 | 456 | 2.2054 | 0.5 |
| 0.4463 | 9.0 | 513 | 2.2434 | 0.5 |
| 0.4463 | 10.0 | 570 | 2.3182 | 0.5 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
796 | blizrys/biobert-v1.1-finetuned-pubmedqa | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: biobert-v1.1-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-finetuned-pubmedqa
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7737
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8810 | 0.56 |
| No log | 2.0 | 114 | 0.8139 | 0.62 |
| No log | 3.0 | 171 | 0.7963 | 0.68 |
| No log | 4.0 | 228 | 0.7709 | 0.66 |
| No log | 5.0 | 285 | 0.7931 | 0.64 |
| No log | 6.0 | 342 | 0.7420 | 0.7 |
| No log | 7.0 | 399 | 0.7654 | 0.7 |
| No log | 8.0 | 456 | 0.7756 | 0.68 |
| 0.5849 | 9.0 | 513 | 0.7605 | 0.68 |
| 0.5849 | 10.0 | 570 | 0.7737 | 0.7 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
798 | blizrys/distilbert-base-uncased-finetuned-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8205807437595517
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6753
- Accuracy: 0.8206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5146 | 1.0 | 24544 | 0.4925 | 0.8049 |
| 0.4093 | 2.0 | 49088 | 0.5090 | 0.8164 |
| 0.3122 | 3.0 | 73632 | 0.5299 | 0.8185 |
| 0.2286 | 4.0 | 98176 | 0.6753 | 0.8206 |
| 0.182 | 5.0 | 122720 | 0.8372 | 0.8195 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
799 | bobo/bobo_classification_function | [
"NOT_RITNING",
"RITNING"
] | |
800 | bowipawan/bert-sentimental | [
"negative",
"neutral",
"positive"
] | For study purposes only. |
801 | world-wide/sent-sci-irrelevance | [
"False",
"True"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bozelosp/autonlp-data-sci-relevance
co2_eq_emissions: 3.667033499762825
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 33199029
- CO2 Emissions (in grams): 3.667033499762825
## Validation Metrics
- Loss: 0.32653310894966125
- Accuracy: 0.9133333333333333
- Precision: 0.9005847953216374
- Recall: 0.9447852760736196
- AUC: 0.9532488468944517
- F1: 0.9221556886227544
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bozelosp/autonlp-sci-relevance-33199029
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bozelosp/autonlp-sci-relevance-33199029", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bozelosp/autonlp-sci-relevance-33199029", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
802 | bshlgrs/autonlp-classification-9522090 | [
"No",
"Unsure",
"Yes"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bshlgrs/autonlp-data-classification
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 9522090
## Validation Metrics
- Loss: 0.3541755676269531
- Accuracy: 0.8759671179883946
- Macro F1: 0.5330133182738012
- Micro F1: 0.8759671179883946
- Weighted F1: 0.8482773065757196
- Macro Precision: 0.537738108882869
- Micro Precision: 0.8759671179883946
- Weighted Precision: 0.8241048710814852
- Macro Recall: 0.5316621214820499
- Micro Recall: 0.8759671179883946
- Weighted Recall: 0.8759671179883946
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-classification-9522090
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-classification-9522090", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-classification-9522090", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
803 | bshlgrs/autonlp-classification_with_all_labellers-9532137 | [
"No",
"Unsure",
"Yes"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bshlgrs/autonlp-data-classification_with_all_labellers
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 9532137
## Validation Metrics
- Loss: 0.34556105732917786
- Accuracy: 0.8749890724713699
- Macro F1: 0.5243623959669343
- Micro F1: 0.8749890724713699
- Weighted F1: 0.8638030768409057
- Macro Precision: 0.5016762404900895
- Micro Precision: 0.8749890724713699
- Weighted Precision: 0.8547962562614184
- Macro Recall: 0.5529674694200845
- Micro Recall: 0.8749890724713699
- Weighted Recall: 0.8749890724713699
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-classification_with_all_labellers-9532137
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
804 | bshlgrs/autonlp-old-data-trained-10022181 | [
"No",
"Unsure",
"Yes"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bshlgrs/autonlp-data-old-data-trained
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 10022181
## Validation Metrics
- Loss: 0.369505375623703
- Accuracy: 0.8706206896551724
- Macro F1: 0.5410226656476808
- Micro F1: 0.8706206896551724
- Weighted F1: 0.8515634683886795
- Macro Precision: 0.5159711665622992
- Micro Precision: 0.8706206896551724
- Weighted Precision: 0.8346991124101657
- Macro Recall: 0.5711653346601209
- Micro Recall: 0.8706206896551724
- Weighted Recall: 0.8706206896551724
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-old-data-trained-10022181
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-old-data-trained-10022181", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-old-data-trained-10022181", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
805 | bsingh/roberta_goEmotion | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
language: en
tags:
- text-classification
- pytorch
- roberta
- emotions
datasets:
- go_emotions
license: mit
widget:
- text: "I am not feeling well today."
---
## This model is trained on the GoEmotions dataset, which contains 58k Reddit comments labeled with 28 emotions
- admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral
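Since the model's own labels are the generic ids `LABEL_0` … `LABEL_27`, a mapping back to emotion names can be sketched as below. This assumes `LABEL_i` corresponds to the i-th emotion in the dataset's usual alphabetical order with `neutral` last (an assumption; verify against the model's `config.id2label` before relying on it):

```python
# Hypothetical mapping from generic LABEL_i ids to GoEmotions emotion names.
# Assumes LABEL_i follows the dataset's canonical (alphabetical) order with
# "neutral" last -- check model.config.id2label to confirm.
EMOTIONS = [
    "admiration", "amusement", "anger", "annoyance", "approval", "caring",
    "confusion", "curiosity", "desire", "disappointment", "disapproval",
    "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief",
    "joy", "love", "nervousness", "optimism", "pride", "realization",
    "relief", "remorse", "sadness", "surprise", "neutral",
]
id2emotion = {f"LABEL_{i}": name for i, name in enumerate(EMOTIONS)}
```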
## Training details:
- The training script is provided here: https://github.com/bsinghpratap/roberta_train_goEmotion
- Please feel free to open an issue in the repo if you have trouble running the model, and I will try to respond as soon as possible.
- The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', 'surprise'
- I'll try to fine-tune the model further and will update here if RoBERTa achieves better performance.
- Each text datapoint can have more than one label. Most of the training set has exactly one label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}). So currently I just use the first label for each datapoint. Not ideal, but it does a decent job.
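The first-label reduction described above can be sketched as follows. This is a minimal illustration with made-up examples, not the actual training script:

```python
from collections import Counter

# Toy multi-label examples in go_emotions style: each datapoint carries a
# list of label ids; most carry exactly one (the ids here are illustrative).
examples = [
    {"text": "Thanks so much!", "labels": [15]},              # gratitude
    {"text": "This is great, I love it", "labels": [0, 18]},  # admiration, love
    {"text": "ok.", "labels": [27]},                          # neutral
]

# Distribution of label counts, analogous to the Counter reported above
label_counts = Counter(len(ex["labels"]) for ex in examples)

# Reduce to single-label classification by keeping only the first label
single_label = [(ex["text"], ex["labels"][0]) for ex in examples]
```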
## Model Performance
| Emotion | GoEmotions Paper | RoBERTa | Support |
|---|---|---|---|
| admiration | 0.65 | 0.62 | 504 |
| amusement | 0.80 | 0.78 | 252 |
| anger | 0.47 | 0.44 | 197 |
| annoyance | 0.34 | 0.22 | 286 |
| approval | 0.36 | 0.31 | 318 |
| caring | 0.39 | 0.24 | 114 |
| confusion | 0.37 | 0.29 | 139 |
| curiosity | 0.54 | 0.48 | 233 |
| disappointment | 0.28 | 0.18 | 127 |
| disapproval | 0.39 | 0.26 | 220 |
| gratitude | 0.86 | 0.84 | 288 |
| joy | 0.51 | 0.47 | 116 |
| love | 0.78 | 0.68 | 169 |
| neutral | 0.68 | 0.61 | 1606 |
| optimism | 0.51 | 0.52 | 120 |
| realization | 0.21 | 0.15 | 109 |
| sadness | 0.49 | 0.42 | 108 |
 |
806 | DATEXIS/CORe-clinical-diagnosis-prediction | [
"003",
"0030",
"0031",
"0038",
"0039",
"004",
"0041",
"0048",
"0049",
"005",
"0051",
"0058",
"0059",
"007",
"0071",
"0074",
"008",
"0080",
"0084",
"0085",
"0086",
"0088",
"009",
"0090",
"0091",
"0092",
"0093",
"010",
"0108",
"011",
"0112",
"0113",
"0116",
"0118",
"0119",
"012",
"0120",
"0121",
"013",
"0130",
"0132",
"0133",
"0135",
"014",
"0140",
"0148",
"015",
"0150",
"018",
"0180",
"0188",
"0189",
"021",
"0218",
"023",
"0239",
"027",
"0270",
"0272",
"0279",
"030",
"0309",
"031",
"0310",
"0311",
"0312",
"0318",
"0319",
"032",
"0328",
"0329",
"033",
"0338",
"034",
"0340",
"035",
"036",
"0360",
"0362",
"0364",
"038",
"0380",
"0381",
"0382",
"0383",
"0384",
"0388",
"0389",
"039",
"0391",
"0392",
"0398",
"040",
"0400",
"0408",
"041",
"0410",
"0411",
"0412",
"0413",
"0414",
"0415",
"0416",
"0417",
"0418",
"0419",
"042",
"045",
"0459",
"046",
"0463",
"0467",
"047",
"0470",
"0478",
"0479",
"048",
"049",
"0490",
"0491",
"0498",
"0499",
"052",
"0520",
"0521",
"0527",
"0529",
"053",
"0530",
"0531",
"0532",
"0537",
"0539",
"054",
"0540",
"0541",
"0542",
"0543",
"0544",
"0545",
"0547",
"0549",
"057",
"0578",
"0579",
"058",
"0582",
"0588",
"062",
"0622",
"066",
"0664",
"070",
"0700",
"0701",
"0702",
"0703",
"0704",
"0705",
"0707",
"0709",
"075",
"077",
"0778",
"0779",
"078",
"0780",
"0781",
"0785",
"0788",
"079",
"0790",
"0793",
"0794",
"0795",
"0796",
"0798",
"0799",
"082",
"0824",
"083",
"0839",
"084",
"0840",
"0844",
"0846",
"0849",
"085",
"0859",
"086",
"0860",
"088",
"0880",
"0888",
"091",
"0912",
"0915",
"0918",
"0919",
"093",
"0931",
"094",
"0940",
"0949",
"096",
"097",
"0970",
"0971",
"0979",
"098",
"0980",
"099",
"0993",
"1",
"10",
"1000",
"10001249",
"1019",
"10th",
"110",
"1100",
"1101",
"1103",
"1104",
"1105",
"1106",
"1108",
"1109",
"111",
"1110",
"1118",
"1119",
"112",
"1120",
"1121",
"1122",
"1123",
"1124",
"1125",
"1128",
"1129",
"114",
"1140",
"1149",
"115",
"1150",
"1151",
"1159",
"116",
"1160",
"117",
"1173",
"1174",
"1175",
"1177",
"1179",
"118",
"11th",
"120",
"1208",
"1209",
"121",
"1211",
"122",
"1228",
"123",
"1231",
"124",
"1249",
"125",
"1250",
"12501499",
"1251",
"127",
"1270",
"1272",
"1273",
"130",
"1300",
"1307",
"1308",
"1309",
"131",
"1310",
"132",
"1320",
"1329",
"133",
"1330",
"134",
"1348",
"135",
"136",
"1361",
"1363",
"1369",
"137",
"1370",
"1373",
"138",
"139",
"1390",
"1398",
"140",
"1400",
"1401",
"1409",
"141",
"1410",
"1414",
"1418",
"1419",
"142",
"1420",
"1429",
"143",
"1430",
"1431",
"144",
"1440",
"1448",
"1449",
"145",
"1450",
"1452",
"1453",
"1455",
"1458",
"146",
"1460",
"1461",
"1463",
"1464",
"1467",
"1468",
"1469",
"147",
"1471",
"1478",
"1479",
"148",
"1481",
"1488",
"1489",
"149",
"1490",
"1498",
"1499",
"150",
"1500",
"15001749",
"1501",
"1503",
"1504",
"1505",
"1508",
"1509",
"151",
"1510",
"1511",
"1512",
"1513",
"1514",
"1515",
"1516",
"1518",
"1519",
"152",
"1520",
"1521",
"1522",
"1528",
"1529",
"153",
"1530",
"1531",
"1532",
"1533",
"1534",
"1535",
"1536",
"1537",
"1538",
"1539",
"154",
"1540",
"1541",
"1542",
"1543",
"1548",
"155",
"1550",
"1551",
"1552",
"156",
"1560",
"1561",
"1562",
"1568",
"1569",
"157",
"1570",
"1571",
"1572",
"1573",
"1574",
"1578",
"1579",
"158",
"1580",
"1588",
"1589",
"159",
"1598",
"1599",
"160",
"1602",
"1603",
"1608",
"1609",
"161",
"1610",
"1611",
"1612",
"1613",
"1618",
"1619",
"162",
"1620",
"1622",
"1623",
"1624",
"1625",
"1628",
"1629",
"163",
"1630",
"1638",
"1639",
"164",
"1640",
"1641",
"1642",
"1643",
"1648",
"1649",
"170",
"1700",
"1702",
"1703",
"1707",
"171",
"1710",
"1712",
"1713",
"1714",
"1715",
"1716",
"1717",
"1718",
"172",
"1720",
"1723",
"1724",
"1725",
"1726",
"1727",
"1728",
"1729",
"173",
"1730",
"1731",
"1732",
"1733",
"1734",
"1735",
"1736",
"1737",
"1738",
"1739",
"174",
"1743",
"1744",
"1745",
"1748",
"1749",
"175",
"1750",
"17501999",
"1759",
"176",
"1760",
"1761",
"1763",
"1764",
"1765",
"1768",
"1769",
"179",
"180",
"1800",
"1808",
"1809",
"182",
"1820",
"183",
"1830",
"1832",
"1838",
"184",
"1840",
"1844",
"1848",
"185",
"186",
"1869",
"187",
"1874",
"188",
"1880",
"1881",
"1882",
"1883",
"1884",
"1885",
"1888",
"1889",
"189",
"1890",
"1891",
"1892",
"1893",
"1898",
"19",
"190",
"1906",
"191",
"1910",
"1911",
"1912",
"1913",
"1914",
"1915",
"1916",
"1917",
"1918",
"1919",
"192",
"1920",
"1921",
"1922",
"1924",
"193",
"194",
"1940",
"1941",
"1943",
"1945",
"195",
"1950",
"1951",
"1952",
"1953",
"1958",
"196",
"1960",
"1961",
"1962",
"1963",
"1965",
"1966",
"1968",
"1969",
"197",
"1970",
"1971",
"1972",
"1973",
"1974",
"1975",
"1976",
"1977",
"1978",
"198",
"1980",
"1981",
"1982",
"1983",
"1984",
"1985",
"1986",
"1987",
"1988",
"199",
"1990",
"1991",
"1999",
"2",
"200",
"2000",
"20002499",
"2001",
"2002",
"2003",
"2004",
"2005",
"2006",
"2007",
"2008",
"201",
"2014",
"2015",
"2019",
"202",
"2020",
"2021",
"2022",
"2024",
"2025",
"2026",
"2027",
"2028",
"2029",
"203",
"2030",
"2031",
"2038",
"204",
"2040",
"2041",
"2048",
"2049",
"205",
"2050",
"2051",
"2053",
"2059",
"206",
"2060",
"207",
"2072",
"2078",
"208",
"2080",
"2089",
"209",
"2090",
"2091",
"2092",
"2093",
"2094",
"2095",
"2096",
"2097",
"210",
"2101",
"2102",
"2104",
"211",
"2110",
"2111",
"2112",
"2113",
"2114",
"2115",
"2116",
"2117",
"2118",
"2119",
"212",
"2120",
"2121",
"2122",
"2123",
"2125",
"2126",
"2127",
"213",
"2130",
"2132",
"2137",
"214",
"2140",
"2141",
"2142",
"2143",
"2144",
"2148",
"2149",
"215",
"2150",
"2153",
"2154",
"2155",
"2156",
"216",
"2163",
"2165",
"2166",
"2167",
"2169",
"217",
"218",
"2180",
"2181",
"2182",
"2189",
"219",
"2191",
"220",
"221",
"2210",
"2218",
"223",
"2230",
"225",
"2250",
"2251",
"2252",
"2253",
"2254",
"226",
"227",
"2270",
"2271",
"2273",
"228",
"2280",
"2281",
"229",
"2298",
"230",
"2300",
"2301",
"2302",
"2306",
"2308",
"2309",
"231",
"2312",
"232",
"2325",
"2329",
"233",
"2330",
"2331",
"2333",
"2334",
"2337",
"2339",
"235",
"2352",
"2353",
"2354",
"2355",
"2356",
"2357",
"2358",
"236",
"2360",
"2362",
"2367",
"2369",
"237",
"2370",
"2371",
"2373",
"2375",
"2376",
"2377",
"2379",
"238",
"2380",
"2381",
"2382",
"2384",
"2386",
"2387",
"2388",
"239",
"2390",
"2391",
"2392",
"2394",
"2395",
"2396",
"2397",
"2398",
"24",
"240",
"2409",
"241",
"2410",
"2411",
"2419",
"242",
"2420",
"2421",
"2422",
"2423",
"2428",
"2429",
"243",
"244",
"2440",
"2441",
"2442",
"2443",
"2448",
"2449",
"245",
"2452",
"2454",
"2458",
"2459",
"246",
"2462",
"2468",
"2469",
"249",
"2490",
"2491",
"2495",
"2496",
"2498",
"2499",
"250",
"2500",
"2501",
"2502",
"250259",
"2503",
"2504",
"2505",
"2506",
"2507",
"2508",
"2509",
"251",
"2511",
"2512",
"2513",
"2515",
"2518",
"2519",
"252",
"2520",
"2521",
"2526",
"2528",
"253",
"2530",
"2531",
"2532",
"2533",
"2534",
"2535",
"2536",
"2537",
"2538",
"2539",
"254",
"2540",
"2541",
"2548",
"255",
"2550",
"2551",
"2552",
"2553",
"2554",
"2555",
"2558",
"2559",
"256",
"2561",
"2563",
"2564",
"257",
"2571",
"2572",
"258",
"2580",
"2581",
"2588",
"2589",
"259",
"2592",
"2594",
"2598",
"2599",
"260",
"260269",
"261",
"262",
"263",
"2630",
"2631",
"2638",
"2639",
"265",
"2650",
"2651",
"2652",
"266",
"2662",
"2669",
"267",
"268",
"2682",
"2689",
"269",
"2690",
"2692",
"2693",
"2698",
"2699",
"270",
"2700",
"2702",
"270279",
"2704",
"2706",
"2707",
"271",
"2710",
"2713",
"2718",
"272",
"2720",
"2721",
"2722",
"2724",
"2725",
"2726",
"2727",
"2728",
"2729",
"273",
"2730",
"2731",
"2732",
"2733",
"2734",
"2738",
"2739",
"274",
"2740",
"2741",
"2748",
"2749",
"275",
"2750",
"2751",
"2752",
"2753",
"2754",
"2755",
"2758",
"2759",
"276",
"2760",
"2761",
"2762",
"2763",
"2764",
"2765",
"2766",
"2767",
"2768",
"2769",
"277",
"2770",
"2771",
"2773",
"2774",
"2776",
"2777",
"2778",
"2779",
"278",
"2780",
"2781",
"2788",
"279",
"2790",
"2793",
"2794",
"2795",
"2798",
"2799",
"280",
"2800",
"280289",
"2808",
"2809",
"281",
"2810",
"2811",
"2812",
"2813",
"2818",
"2819",
"282",
"2820",
"2821",
"2822",
"2823",
"2824",
"2825",
"2826",
"2827",
"2828",
"2829",
"283",
"2830",
"2831",
"2832",
"2839",
"284",
"2841",
"2842",
"2848",
"2849",
"285",
"2851",
"2852",
"2853",
"2858",
"2859",
"286",
"2860",
"2861",
"2862",
"2863",
"2864",
"2865",
"2866",
"2867",
"2869",
"287",
"2870",
"2871",
"2872",
"2873",
"2874",
"2875",
"2879",
"288",
"2880",
"2881",
"2882",
"2883",
"2884",
"2885",
"2886",
"2888",
"2889",
"289",
"2890",
"2891",
"2893",
"2894",
"2895",
"2897",
"2898",
"2899",
"290",
"2900",
"2901",
"290299",
"2903",
"2904",
"291",
"2910",
"2911",
"2912",
"2913",
"2918",
"292",
"2920",
"2921",
"2928",
"2929",
"293",
"2930",
"2931",
"2938",
"2939",
"294",
"2940",
"2941",
"2942",
"2948",
"2949",
"295",
"2950",
"2951",
"2952",
"2953",
"2954",
"2956",
"2957",
"2958",
"2959",
"296",
"2960",
"2961",
"2962",
"2963",
"2964",
"2965",
"2967",
"2968",
"2969",
"297",
"2971",
"2972",
"2978",
"2979",
"298",
"2980",
"2982",
"2984",
"2989",
"299",
"2990",
"2998",
"2999",
"30",
"300",
"3000",
"3001",
"3002",
"3003",
"300309",
"3004",
"3007",
"3008",
"3009",
"301",
"3010",
"3011",
"3012",
"3013",
"3014",
"3015",
"3017",
"3018",
"3019",
"302",
"3025",
"3028",
"3029",
"303",
"3030",
"3039",
"304",
"3040",
"3041",
"3042",
"3043",
"3044",
"3046",
"3047",
"3048",
"3049",
"305",
"3050",
"3051",
"3052",
"3053",
"3054",
"3055",
"3056",
"3057",
"3058",
"3059",
"306",
"3060",
"3061",
"3062",
"3068",
"3069",
"307",
"3071",
"3072",
"3074",
"3075",
"3076",
"3078",
"3079",
"308",
"3080",
"3081",
"3082",
"3083",
"3089",
"309",
"3090",
"3091",
"3092",
"3093",
"3094",
"3098",
"3099",
"310",
"3100",
"3101",
"3102",
"310319",
"3108",
"3109",
"311",
"312",
"3123",
"3128",
"3129",
"313",
"3132",
"3138",
"314",
"3140",
"315",
"3152",
"3153",
"3154",
"3158",
"3159",
"316",
"317",
"318",
"3180",
"3181",
"3182",
"319",
"320",
"3200",
"3201",
"3202",
"3203",
"320329",
"3207",
"3208",
"3209",
"321",
"3210",
"3212",
"322",
"3220",
"3221",
"3222",
"3229",
"323",
"3234",
"3235",
"3236",
"3237",
"3238",
"3239",
"324",
"3240",
"3241",
"3249",
"325",
"326",
"327",
"3271",
"3272",
"3273",
"3274",
"330",
"3300",
"3301",
"330339",
"3308",
"331",
"3310",
"3311",
"3313",
"3314",
"3315",
"3318",
"3319",
"332",
"3320",
"3321",
"333",
"3330",
"3331",
"3332",
"3334",
"3335",
"3336",
"3337",
"3338",
"3339",
"334",
"3340",
"3341",
"3342",
"3343",
"3344",
"3348",
"3349",
"335",
"3351",
"3352",
"336",
"3360",
"3361",
"3363",
"3368",
"3369",
"337",
"3370",
"3371",
"3372",
"3373",
"3379",
"338",
"3380",
"3381",
"3382",
"3383",
"3384",
"339",
"3390",
"3391",
"3392",
"3393",
"3398",
"340",
"340349",
"341",
"3410",
"3411",
"3412",
"3418",
"3419",
"342",
"3420",
"3421",
"3428",
"3429",
"343",
"3430",
"3431",
"3432",
"3434",
"3438",
"3439",
"344",
"3440",
"3441",
"3442",
"3443",
"3444",
"3445",
"3446",
"3448",
"3449",
"345",
"3450",
"3451",
"3452",
"3453",
"3454",
"3455",
"3457",
"3458",
"3459",
"346",
"3460",
"3462",
"3467",
"3468",
"3469",
"347",
"3470",
"3471",
"348",
"3480",
"3481",
"3482",
"3483",
"3484",
"3485",
"3488",
"3489",
"349",
"3490",
"3491",
"3492",
"3493",
"3498",
"3499",
"350",
"3501",
"3502",
"350359",
"3509",
"351",
"3510",
"3518",
"3519",
"352",
"3522",
"3523",
"3524",
"3526",
"3529",
"353",
"3530",
"3536",
"354",
"3540",
"3541",
"3542",
"3543",
"3545",
"3548",
"3549",
"355",
"3550",
"3551",
"3552",
"3553",
"3555",
"3556",
"3557",
"3558",
"3559",
"356",
"3561",
"3562",
"3568",
"3569",
"357",
"3570",
"3571",
"3572",
"3573",
"3574",
"3575",
"3576",
"3577",
"3578",
"358",
"3580",
"3581",
"3588",
"3589",
"359",
"3590",
"3591",
"3592",
"3593",
"3594",
"3595",
"3597",
"3598",
"3599",
"360",
"3600",
"3601",
"360369",
"3604",
"361",
"3610",
"3612",
"3618",
"3619",
"362",
"3620",
"3621",
"3622",
"3623",
"3624",
"3625",
"3627",
"3628",
"3629",
"363",
"3631",
"3632",
"3636",
"3637",
"364",
"3640",
"3643",
"3644",
"3647",
"3649",
"365",
"3650",
"3651",
"3652",
"3654",
"3655",
"3656",
"3657",
"3658",
"3659",
"366",
"3661",
"3664",
"3668",
"3669",
"367",
"3671",
"3674",
"368",
"3680",
"3681",
"3682",
"3684",
"3685",
"3688",
"3689",
"369",
"3690",
"3691",
"3693",
"3694",
"3696",
"3697",
"3698",
"3699",
"37",
"370",
"3700",
"3702",
"3703",
"370379",
"3708",
"3709",
"371",
"3714",
"3718",
"372",
"3720",
"3721",
"3723",
"3724",
"3727",
"3728",
"3729",
"373",
"3730",
"3731",
"3732",
"3739",
"374",
"3741",
"3742",
"3743",
"3744",
"3745",
"3748",
"3749",
"375",
"3750",
"3751",
"3752",
"3753",
"3755",
"3759",
"376",
"3760",
"3761",
"3763",
"3765",
"3768",
"3769",
"377",
"3770",
"3771",
"3773",
"3774",
"3775",
"3777",
"378",
"3780",
"3781",
"3782",
"3784",
"3785",
"3787",
"3788",
"3789",
"379",
"3790",
"3792",
"3794",
"3795",
"3798",
"3799",
"380",
"3800",
"3801",
"3802",
"380389",
"3804",
"381",
"3810",
"3814",
"382",
"3820",
"3824",
"3829",
"383",
"3830",
"3831",
"3832",
"3839",
"384",
"3840",
"3842",
"385",
"3858",
"386",
"3860",
"3861",
"3862",
"3863",
"3869",
"387",
"3879",
"388",
"3883",
"3884",
"3885",
"3886",
"3887",
"3888",
"389",
"3890",
"3891",
"3892",
"3897",
"3898",
"3899",
"390",
"390399",
"391",
"3910",
"3911",
"3918",
"393",
"394",
"3940",
"3941",
"3942",
"3949",
"395",
"3950",
"3951",
"3952",
"3959",
"396",
"3960",
"3961",
"3962",
"3963",
"3968",
"3969",
"397",
"3970",
"3971",
"398",
"3989",
"400449",
"401",
"4010",
"4011",
"4019",
"402",
"4020",
"4021",
"4029",
"403",
"4030",
"4031",
"4039",
"404",
"4040",
"4041",
"4049",
"405",
"4050",
"4051",
"4059",
"410",
"4100",
"4101",
"4102",
"4103",
"4104",
"4105",
"4106",
"4107",
"4108",
"4109",
"411",
"4110",
"4111",
"4118",
"412",
"413",
"4131",
"4139",
"414",
"4140",
"4141",
"4142",
"4144",
"4148",
"4149",
"415",
"4150",
"4151",
"416",
"4160",
"4162",
"4168",
"4169",
"417",
"4170",
"4171",
"4178",
"4179",
"420",
"4200",
"4209",
"421",
"4210",
"4219",
"422",
"4220",
"4229",
"423",
"4230",
"4231",
"4232",
"4233",
"4238",
"4239",
"424",
"4240",
"4241",
"4242",
"4243",
"4249",
"425",
"4251",
"4253",
"4254",
"4255",
"4257",
"4258",
"4259",
"426",
"4260",
"4261",
"4262",
"4263",
"4264",
"4265",
"4266",
"4267",
"4268",
"4269",
"427",
"4270",
"4271",
"4272",
"4273",
"4274",
"4275",
"4276",
"4278",
"4279",
"428",
"4280",
"4281",
"4282",
"4283",
"4284",
"4289",
"429",
"4290",
"4291",
"4292",
"4293",
"4294",
"4295",
"4296",
"4297",
"4298",
"4299",
"430",
"431",
"432",
"4320",
"4321",
"4329",
"433",
"4330",
"4331",
"4332",
"4333",
"4338",
"4339",
"434",
"4340",
"4341",
"4349",
"435",
"4350",
"4351",
"4352",
"4353",
"4358",
"4359",
"436",
"437",
"4370",
"4371",
"4372",
"4373",
"4374",
"4375",
"4376",
"4377",
"4378",
"4379",
"438",
"4380",
"4381",
"4382",
"4383",
"4384",
"4385",
"4386",
"4387",
"4388",
"4389",
"440",
"4400",
"4401",
"4402",
"4403",
"4404",
"4408",
"4409",
"441",
"4410",
"4411",
"4412",
"4413",
"4414",
"4416",
"4417",
"4419",
"442",
"4420",
"4421",
"4422",
"4423",
"4428",
"443",
"4430",
"4431",
"4432",
"4438",
"4439",
"444",
"4440",
"4441",
"4442",
"4448",
"4449",
"445",
"4450",
"4458",
"446",
"4460",
"4462",
"4464",
"4465",
"4466",
"4467",
"447",
"4470",
"4471",
"4472",
"4473",
"4474",
"4475",
"4476",
"4477",
"4478",
"4479",
"448",
"4480",
"4481",
"4489",
"449",
"450499",
"451",
"4510",
"4511",
"4512",
"4518",
"4519",
"452",
"453",
"4530",
"4531",
"4532",
"4533",
"4534",
"4535",
"4536",
"4537",
"4538",
"4539",
"454",
"4540",
"4541",
"4542",
"4548",
"4549",
"455",
"4550",
"4551",
"4552",
"4553",
"4554",
"4555",
"4556",
"4558",
"4559",
"456",
"4560",
"4561",
"4562",
"4568",
"457",
"4570",
"4571",
"4572",
"4578",
"458",
"4580",
"4581",
"4582",
"4588",
"4589",
"459",
"4590",
"4591",
"4592",
"4598",
"4599",
"461",
"4610",
"4611",
"4612",
"4613",
"4618",
"4619",
"462",
"463",
"464",
"4640",
"4641",
"4643",
"4645",
"465",
"4659",
"466",
"4660",
"4661",
"470",
"471",
"4710",
"4718",
"4719",
"472",
"4720",
"473",
"4730",
"4731",
"4732",
"4733",
"4738",
"4739",
"474",
"4740",
"4741",
"4748",
"4749",
"475",
"477",
"4770",
"4772",
"4778",
"4779",
"478",
"4780",
"4781",
"4782",
"4783",
"4784",
"4785",
"4786",
"4787",
"4789",
"480",
"4801",
"4802",
"4808",
"4809",
"481",
"482",
"4820",
"4821",
"4822",
"4823",
"4824",
"4828",
"4829",
"483",
"4830",
"4838",
"484",
"4841",
"4843",
"4846",
"4847",
"4848",
"485",
"486",
"487",
"4870",
"4871",
"4878",
"488",
"4880",
"4881",
"490",
"491",
"4910",
"4912",
"4918",
"4919",
"492",
"4920",
"4928",
"493",
"4930",
"4932",
"4938",
"4939",
"494",
"4940",
"4941",
"495",
"4957",
"4958",
"4959",
"496",
"500",
"500599",
"500749",
"501",
"502",
"5059",
"506",
"5060",
"507",
"5070",
"5071",
"5078",
"508",
"5080",
"5081",
"5082",
"5088",
"510",
"5100",
"5109",
"511",
"5110",
"5111",
"5118",
"5119",
"512",
"5120",
"5121",
"5122",
"5128",
"513",
"5130",
"5131",
"514",
"515",
"516",
"5160",
"5161",
"5163",
"5164",
"5168",
"5169",
"517",
"5172",
"5173",
"5178",
"518",
"5180",
"5181",
"5183",
"5184",
"5185",
"5186",
"5187",
"5188",
"519",
"5190",
"5191",
"5192",
"5193",
"5194",
"5198",
"5199",
"520",
"5206",
"521",
"5210",
"5218",
"5219",
"522",
"5224",
"5225",
"5226",
"523",
"5231",
"5233",
"5234",
"5235",
"5238",
"5239",
"524",
"5244",
"5246",
"5248",
"525",
"5251",
"5253",
"5254",
"5255",
"5256",
"5257",
"5258",
"5259",
"526",
"5260",
"5262",
"5264",
"5265",
"5268",
"5269",
"527",
"5272",
"5273",
"5275",
"5277",
"5278",
"5279",
"528",
"5280",
"5282",
"5283",
"5285",
"5289",
"529",
"5290",
"5291",
"5293",
"5296",
"5298",
"530",
"5300",
"5301",
"5302",
"5303",
"5304",
"5305",
"5306",
"5307",
"5308",
"5309",
"531",
"5310",
"5311",
"5312",
"5313",
"5314",
"5315",
"5316",
"5317",
"5319",
"532",
"5320",
"5321",
"5322",
"5323",
"5324",
"5325",
"5326",
"5327",
"5329",
"533",
"5330",
"5331",
"5334",
"5337",
"5339",
"534",
"5340",
"5341",
"5343",
"5344",
"5345",
"5349",
"535",
"5350",
"5351",
"5352",
"5353",
"5354",
"5355",
"5356",
"5357",
"536",
"5361",
"5362",
"5363",
"5364",
"5368",
"5369",
"537",
"5370",
"5371",
"5373",
"5374",
"5378",
"5379",
"538",
"539",
"5398",
"540",
"5400",
"5401",
"5409",
"541",
"542",
"543",
"5439",
"550",
"5500",
"5501",
"5509",
"551",
"5510",
"5511",
"5512",
"5513",
"5518",
"552",
"5520",
"5521",
"5522",
"5523",
"5528",
"5529",
"553",
"5530",
"5531",
"5532",
"5533",
"5538",
"5539",
"555",
"5550",
"5551",
"5552",
"5559",
"556",
"5560",
"5561",
"5562",
"5563",
"5564",
"5565",
"5566",
"5568",
"5569",
"557",
"5570",
"5571",
"5579",
"558",
"5581",
"5582",
"5583",
"5584",
"5589",
"560",
"5600",
"5601",
"5602",
"5603",
"5608",
"5609",
"562",
"5620",
"5621",
"564",
"5640",
"5641",
"5642",
"5643",
"5644",
"5646",
"5647",
"5648",
"565",
"5650",
"5651",
"566",
"567",
"5670",
"5671",
"5672",
"5673",
"5678",
"5679",
"568",
"5680",
"5688",
"569",
"5690",
"5691",
"5692",
"5693",
"5694",
"5695",
"5696",
"5697",
"5698",
"5699",
"570",
"571",
"5710",
"5711",
"5712",
"5713",
"5714",
"5715",
"5716",
"5718",
"5719",
"572",
"5720",
"5721",
"5722",
"5723",
"5724",
"5728",
"573",
"5730",
"5731",
"5733",
"5734",
"5735",
"5738",
"5739",
"574",
"5740",
"5741",
"5742",
"5743",
"5744",
"5745",
"5746",
"5747",
"5748",
"5749",
"575",
"5750",
"5751",
"5752",
"5753",
"5754",
"5755",
"5756",
"5758",
"5759",
"576",
"5760",
"5761",
"5762",
"5763",
"5764",
"5768",
"5769",
"577",
"5770",
"5771",
"5772",
"5778",
"5779",
"578",
"5780",
"5781",
"5789",
"579",
"5790",
"5793",
"5798",
"5799",
"580",
"5800",
"5804",
"5808",
"5809",
"581",
"5810",
"5811",
"5812",
"5818",
"5819",
"582",
"5820",
"5821",
"5822",
"5824",
"5828",
"5829",
"583",
"5830",
"5831",
"5832",
"5834",
"5838",
"5839",
"584",
"5845",
"5846",
"5847",
"5848",
"5849",
"585",
"5851",
"5852",
"5853",
"5854",
"5855",
"5856",
"5859",
"586",
"587",
"588",
"5880",
"5881",
"5888",
"589",
"5890",
"590",
"5900",
"5901",
"5902",
"5908",
"5909",
"591",
"592",
"5920",
"5921",
"5929",
"593",
"5931",
"5932",
"5933",
"5934",
"5935",
"5937",
"5938",
"5939",
"594",
"5940",
"5941",
"5942",
"5949",
"595",
"5950",
"5951",
"5952",
"5958",
"5959",
"596",
"5960",
"5961",
"5963",
"5964",
"5965",
"5966",
"5967",
"5968",
"5969",
"597",
"5970",
"5978",
"598",
"5980",
"5981",
"5982",
"5988",
"5989",
"599",
"5990",
"5991",
"5993",
"5994",
"5995",
"5996",
"5997",
"5998",
"5q",
"6",
"600",
"6000",
"6001",
"6002",
"600699",
"6009",
"601",
"6010",
"6011",
"6012",
"6018",
"6019",
"602",
"6021",
"6023",
"6028",
"603",
"6031",
"6038",
"6039",
"604",
"6040",
"6049",
"605",
"607",
"6071",
"6072",
"6073",
"6078",
"6079",
"608",
"6080",
"6082",
"6084",
"6088",
"6089",
"610",
"6101",
"611",
"6110",
"6111",
"6116",
"6117",
"6118",
"6119",
"614",
"6140",
"6141",
"6142",
"6143",
"6144",
"6145",
"6146",
"6149",
"615",
"6150",
"6151",
"6159",
"616",
"6160",
"6161",
"6162",
"6164",
"6165",
"6168",
"6169",
"617",
"6170",
"6171",
"6172",
"6173",
"6175",
"6178",
"6179",
"618",
"6180",
"6181",
"6182",
"6183",
"6184",
"6185",
"6188",
"619",
"6190",
"6191",
"6192",
"6198",
"620",
"6200",
"6201",
"6202",
"6203",
"6205",
"6208",
"6209",
"621",
"6210",
"6212",
"6213",
"6214",
"6218",
"622",
"6221",
"623",
"6232",
"6235",
"6238",
"624",
"6240",
"6248",
"6249",
"625",
"6251",
"6253",
"6254",
"6255",
"6256",
"6257",
"6258",
"6259",
"626",
"6260",
"6261",
"6262",
"6264",
"6266",
"6268",
"6269",
"627",
"6270",
"6271",
"6272",
"6273",
"6274",
"6278",
"6279",
"628",
"6289",
"629",
"6298",
"632",
"633",
"6331",
"6332",
"6338",
"634",
"6340",
"6341",
"6342",
"6345",
"6349",
"635",
"6350",
"6351",
"6352",
"6355",
"6357",
"6359",
"639",
"6390",
"6391",
"6392",
"6396",
"6398",
"641",
"6410",
"6411",
"6412",
"6413",
"642",
"6420",
"6421",
"6422",
"6423",
"6424",
"6425",
"6426",
"6427",
"6429",
"643",
"6430",
"6431",
"644",
"6440",
"6442",
"645",
"6451",
"646",
"6462",
"6465",
"6466",
"6467",
"6468",
"647",
"6476",
"6478",
"6479",
"648",
"6480",
"6481",
"6482",
"6483",
"6484",
"6485",
"6486",
"6488",
"6489",
"649",
"6490",
"6491",
"6493",
"6494",
"651",
"6510",
"652",
"6522",
"6525",
"6526",
"654",
"6540",
"6541",
"6542",
"6544",
"6545",
"655",
"6557",
"6558",
"656",
"6561",
"6564",
"6565",
"6566",
"6567",
"657",
"6570",
"658",
"6580",
"6582",
"659",
"6592",
"6595",
"6596",
"6597",
"660",
"6600",
"6602",
"661",
"6611",
"6613",
"663",
"6633",
"664",
"6640",
"6641",
"665",
"6651",
"6652",
"6653",
"6654",
"6655",
"6656",
"6657",
"666",
"6660",
"6661",
"6662",
"6663",
"668",
"6681",
"6682",
"669",
"6691",
"6692",
"6693",
"6694",
"670",
"6700",
"6701",
"671",
"6715",
"672",
"6720",
"673",
"6731",
"6732",
"6733",
"674",
"6740",
"6741",
"6743",
"6745",
"6748",
"677",
"680",
"6802",
"6805",
"6806",
"6809",
"681",
"6810",
"6811",
"682",
"6820",
"6821",
"6822",
"6823",
"6824",
"6825",
"6826",
"6827",
"6828",
"6829",
"683",
"684",
"685",
"6850",
"6851",
"686",
"6860",
"6861",
"6868",
"6869",
"690",
"6901",
"691",
"6910",
"6918",
"692",
"6920",
"6923",
"6924",
"6926",
"6927",
"6928",
"6929",
"693",
"6930",
"6931",
"6938",
"694",
"6940",
"6944",
"6945",
"6948",
"695",
"6950",
"6951",
"6952",
"6953",
"6954",
"6955",
"6958",
"6959",
"696",
"6960",
"6961",
"6962",
"6963",
"6965",
"697",
"6970",
"6979",
"698",
"6981",
"6982",
"6983",
"6984",
"6988",
"6989",
"70",
"700",
"701",
"7010",
"7011",
"7012",
"7013",
"7015",
"7018",
"7019",
"702",
"7020",
"7021",
"7028",
"703",
"7030",
"7038",
"704",
"7040",
"7041",
"7048",
"705",
"7051",
"7052",
"7058",
"706",
"7061",
"7062",
"7068",
"7069",
"707",
"7070",
"7071",
"7072",
"7078",
"7079",
"708",
"7080",
"7083",
"7088",
"7089",
"709",
"7090",
"7092",
"7093",
"7094",
"7098",
"7099",
"710",
"7100",
"7101",
"7102",
"7103",
"7104",
"7105",
"7108",
"7109",
"711",
"7110",
"7112",
"7115",
"7118",
"7119",
"712",
"7121",
"7122",
"7123",
"7129",
"713",
"7131",
"7132",
"7135",
"7138",
"714",
"7140",
"7141",
"7142",
"7143",
"7148",
"7149",
"715",
"7150",
"7151",
"7153",
"7158",
"7159",
"716",
"7161",
"7165",
"7166",
"7168",
"7169",
"717",
"7176",
"7178",
"718",
"7181",
"7182",
"7183",
"7184",
"7185",
"7186",
"7188",
"7189",
"719",
"7190",
"7191",
"7192",
"7193",
"7194",
"7195",
"7196",
"7197",
"7198",
"7199",
"720",
"7200",
"7202",
"7209",
"721",
"7210",
"7211",
"7212",
"7213",
"7214",
"7217",
"7218",
"7219",
"722",
"7220",
"7221",
"7222",
"7223",
"7224",
"7225",
"7226",
"7227",
"7228",
"7229",
"723",
"7230",
"7231",
"7234",
"7235",
"7236",
"7237",
"7238",
"724",
"7240",
"7242",
"7243",
"7244",
"7245",
"7246",
"7248",
"7249",
"725",
"726",
"7260",
"7261",
"7262",
"7263",
"7265",
"7266",
"7267",
"7269",
"727",
"7270",
"7271",
"7273",
"7274",
"7275",
"7276",
"7278",
"7279",
"728",
"7280",
"7281",
"7282",
"7283",
"7284",
"7286",
"7287",
"7288",
"7289",
"729",
"7291",
"7292",
"7293",
"7294",
"7295",
"7296",
"7297",
"7298",
"7299",
"730",
"7300",
"7301",
"7302",
"7308",
"7309",
"731",
"7310",
"7313",
"7318",
"732",
"7320",
"7321",
"7323",
"7324",
"7325",
"733",
"7330",
"7331",
"7332",
"7333",
"7334",
"7335",
"7336",
"7338",
"7339",
"734",
"735",
"7350",
"7354",
"7355",
"7358",
"7359",
"736",
"7360",
"7362",
"7363",
"7366",
"7367",
"7368",
"737",
"7371",
"7372",
"7373",
"7374",
"738",
"7380",
"7381",
"7383",
"7384",
"7385",
"7386",
"7388",
"741",
"7410",
"7419",
"742",
"7420",
"7421",
"7422",
"7423",
"7424",
"7425",
"7428",
"7429",
"743",
"7432",
"7433",
"7436",
"744",
"7440",
"7441",
"7442",
"7444",
"745",
"7451",
"7452",
"7454",
"7455",
"7456",
"7458",
"7459",
"746",
"7460",
"7461",
"7462",
"7463",
"7464",
"7466",
"7468",
"7469",
"747",
"7470",
"7471",
"7472",
"7473",
"7474",
"7476",
"7478",
"748",
"7480",
"7481",
"7482",
"7483",
"7485",
"7486",
"7488",
"749",
"7490",
"750",
"7501",
"7502",
"7503",
"7504",
"7508",
"7509",
"750999",
"751",
"7510",
"7511",
"7512",
"7513",
"7514",
"7515",
"7516",
"7517",
"752",
"7520",
"7521",
"7522",
"7523",
"7524",
"7525",
"7526",
"7528",
"753",
"7530",
"7531",
"7532",
"7533",
"7534",
"7538",
"7539",
"754",
"7542",
"7543",
"7546",
"7547",
"7548",
"755",
"7552",
"7553",
"7555",
"7556",
"756",
"7560",
"7561",
"7564",
"7565",
"7566",
"7568",
"757",
"7570",
"7573",
"758",
"7580",
"7581",
"7583",
"7585",
"7586",
"7587",
"7588",
"7589",
"759",
"7590",
"7592",
"7593",
"7595",
"7596",
"7598",
"760",
"7607",
"763",
"7638",
"764",
"7640",
"765",
"7650",
"7651",
"7652",
"766",
"7661",
"768",
"7689",
"769",
"770",
"7700",
"7702",
"7705",
"7706",
"7707",
"7708",
"771",
"7716",
"7717",
"7718",
"772",
"7721",
"7726",
"773",
"7731",
"7732",
"774",
"7742",
"7746",
"775",
"7755",
"7756",
"7757",
"776",
"7766",
"7767",
"777",
"7771",
"7775",
"7776",
"7778",
"778",
"7783",
"7784",
"7786",
"779",
"7793",
"7795",
"7798",
"780",
"7800",
"7801",
"7802",
"7803",
"7804",
"7805",
"7806",
"7807",
"7808",
"7809",
"781",
"7810",
"7811",
"7812",
"7813",
"7816",
"7817",
"7818",
"7819",
"782",
"7820",
"7821",
"7822",
"7823",
"7824",
"7825",
"7826",
"7827",
"7828",
"783",
"7830",
"7831",
"7832",
"7833",
"7834",
"7835",
"7836",
"7837",
"784",
"7840",
"7841",
"7842",
"7843",
"7844",
"7845",
"7846",
"7847",
"7849",
"785",
"7850",
"7851",
"7852",
"7854",
"7855",
"7856",
"7859",
"786",
"7860",
"7861",
"7862",
"7863",
"7864",
"7865",
"7866",
"7868",
"7869",
"787",
"7870",
"7871",
"7872",
"7873",
"7874",
"7876",
"7879",
"788",
"7881",
"7882",
"7883",
"7884",
"7885",
"7886",
"7887",
"7888",
"7889",
"789",
"7890",
"7891",
"7892",
"7893",
"7894",
"7895",
"7896",
"790",
"7900",
"7901",
"7902",
"7904",
"7905",
"7906",
"7907",
"7908",
"7909",
"791",
"7910",
"7912",
"7913",
"7915",
"7916",
"7919",
"792",
"7920",
"7921",
"7929",
"793",
"7930",
"7931",
"7932",
"7933",
"7934",
"7935",
"7936",
"7937",
"7938",
"7939",
"794",
"7940",
"7942",
"7943",
"7944",
"7945",
"7946",
"7947",
"7948",
"7949",
"795",
"7950",
"7951",
"7953",
"7955",
"7957",
"7958",
"796",
"7960",
"7961",
"7962",
"7963",
"7964",
"7967",
"7969",
"798",
"7981",
"799",
"7990",
"7991",
"7992",
"7993",
"7994",
"7995",
"7998",
"800",
"8000",
"8001",
"8002",
"8003",
"8006",
"8007",
"8008",
"801",
"8010",
"8011",
"8012",
"8013",
"8014",
"8015",
"8016",
"8017",
"8018",
"8019",
"802",
"8020",
"8021",
"8022",
"8023",
"8024",
"8025",
"8026",
"8027",
"8028",
"8029",
"803",
"8030",
"8031",
"8032",
"8033",
"8034",
"8035",
"8036",
"8037",
"804",
"8040",
"8041",
"8042",
"8043",
"8044",
"8046",
"8047",
"8048",
"805",
"8050",
"8052",
"8053",
"8054",
"8055",
"8056",
"8058",
"806",
"8060",
"8061",
"8062",
"8063",
"8064",
"8065",
"8066",
"8068",
"807",
"8070",
"8071",
"8072",
"8073",
"8074",
"8075",
"8076",
"808",
"8080",
"8081",
"8082",
"8083",
"8084",
"8085",
"8088",
"8089",
"810",
"8100",
"8101",
"811",
"8110",
"8111",
"812",
"8120",
"8121",
"8122",
"8123",
"8124",
"8125",
"813",
"8130",
"8131",
"8132",
"8133",
"8134",
"8135",
"8138",
"8139",
"814",
"8140",
"8141",
"815",
"8150",
"8151",
"816",
"8160",
"8161",
"817",
"8170",
"8171",
"819",
"8190",
"8191",
"820",
"8200",
"8201",
"8202",
"8203",
"8208",
"8209",
"821",
"8210",
"8211",
"8212",
"8213",
"822",
"8220",
"8221",
"823",
"8230",
"8231",
"8232",
"8233",
"8234",
"8238",
"8239",
"824",
"8240",
"8241",
"8242",
"8243",
"8244",
"8245",
"8246",
"8247",
"8248",
"8249",
"825",
"8250",
"8251",
"8252",
"8253",
"826",
"8260",
"8261",
"828",
"8280",
"8281",
"830",
"8300",
"831",
"8310",
"8311",
"832",
"8320",
"833",
"8330",
"8331",
"834",
"8340",
"8341",
"835",
"8350",
"836",
"8360",
"8361",
"8362",
"8363",
"8364",
"8365",
"8366",
"837",
"8370",
"8371",
"838",
"8380",
"8381",
"839",
"8390",
"8392",
"8394",
"8396",
"8397",
"840",
"8400",
"8404",
"8406",
"8407",
"8408",
"8409",
"841",
"8411",
"8418",
"842",
"8420",
"843",
"8438",
"8439",
"844",
"8440",
"8441",
"8442",
"8448",
"8449",
"845",
"8450",
"8451",
"846",
"8460",
"8461",
"8469",
"847",
"8470",
"8471",
"8472",
"8479",
"848",
"8488",
"850",
"8500",
"8501",
"8502",
"8504",
"8505",
"8509",
"851",
"8510",
"8512",
"8513",
"8514",
"8515",
"8516",
"8517",
"8518",
"8519",
"852",
"8520",
"8521",
"8522",
"8523",
"8524",
"8525",
"853",
"8530",
"8531",
"854",
"8540",
"860",
"8600",
"8601",
"8602",
"8603",
"8604",
"8605",
"861",
"8610",
"8611",
"8612",
"8613",
"862",
"8620",
"8621",
"8622",
"8623",
"8629",
"863",
"8630",
"8631",
"8632",
"8633",
"8634",
"8635",
"8638",
"8639",
"864",
"8640",
"8641",
"865",
"8650",
"8651",
"866",
"8660",
"8661",
"867",
"8670",
"8671",
"8672",
"8676",
"8677",
"8678",
"8679",
"868",
"8680",
"8681",
"869",
"8690",
"8691",
"870",
"8700",
"8701",
"8702",
"8703",
"8704",
"8708",
"871",
"8710",
"8711",
"8712",
"8713",
"8716",
"872",
"8720",
"8726",
"8728",
"873",
"8730",
"8731",
"8732",
"8733",
"8734",
"8735",
"8736",
"8737",
"8738",
"874",
"8740",
"8741",
"8742",
"8744",
"8745",
"8748",
"8749",
"875",
"8750",
"8751",
"876",
"8760",
"8761",
"877",
"8770",
"878",
"8780",
"8782",
"8783",
"8785",
"8786",
"8787",
"879",
"8790",
"8791",
"8792",
"8793",
"8794",
"8795",
"8796",
"8797",
"8798",
"8799",
"880",
"8800",
"8801",
"8802",
"881",
"8810",
"8811",
"8812",
"882",
"8820",
"8821",
"8822",
"883",
"8830",
"8831",
"8832",
"884",
"8840",
"885",
"8850",
"8851",
"886",
"8860",
"8861",
"887",
"8870",
"8871",
"8872",
"8873",
"8875",
"890",
"8900",
"8901",
"8902",
"891",
"8910",
"8911",
"8912",
"892",
"8920",
"8921",
"8922",
"893",
"8930",
"894",
"8940",
"896",
"8960",
"8961",
"897",
"8970",
"8972",
"8973",
"8977",
"900",
"9000",
"9001",
"9008",
"9009",
"901",
"9010",
"9011",
"9012",
"9013",
"9014",
"9018",
"9019",
"902",
"9020",
"9021",
"9022",
"9023",
"9024",
"9025",
"9028",
"9029",
"903",
"9030",
"9031",
"9032",
"9033",
"9034",
"9035",
"9038",
"9039",
"904",
"9040",
"9041",
"9042",
"9043",
"9044",
"9045",
"9046",
"9047",
"9048",
"905",
"9050",
"9051",
"9052",
"9053",
"9054",
"9055",
"9056",
"906",
"9060",
"9061",
"9063",
"9064",
"9065",
"9067",
"9068",
"907",
"9070",
"9072",
"9074",
"9075",
"908",
"9080",
"9081",
"9082",
"9083",
"9086",
"9089",
"909",
"9090",
"9092",
"9093",
"9094",
"9095",
"9099",
"910",
"9100",
"9102",
"9104",
"9108",
"911",
"9110",
"9112",
"9114",
"9116",
"912",
"9120",
"9122",
"9125",
"913",
"9130",
"9132",
"914",
"9140",
"9142",
"9149",
"915",
"9152",
"916",
"9160",
"9161",
"9162",
"9164",
"9165",
"917",
"9170",
"9171",
"9172",
"9173",
"918",
"9180",
"9181",
"9189",
"919",
"9190",
"9191",
"9196",
"9198",
"920",
"921",
"9210",
"9211",
"9212",
"9213",
"9219",
"922",
"9220",
"9221",
"9222",
"9223",
"9224",
"9228",
"9229",
"923",
"9230",
"9231",
"9232",
"9233",
"9238",
"9239",
"924",
"9240",
"9241",
"9242",
"9243",
"9245",
"9248",
"9249",
"925",
"9252",
"926",
"9260",
"9261",
"927",
"9270",
"9271",
"9272",
"9273",
"9278",
"928",
"9280",
"9281",
"9282",
"930",
"9301",
"9308",
"9309",
"932",
"933",
"9330",
"9331",
"934",
"9340",
"9341",
"9348",
"9349",
"935",
"9351",
"9352",
"936",
"937",
"938",
"939",
"9390",
"9392",
"9393",
"941",
"9410",
"9412",
"942",
"9420",
"9421",
"9422",
"9423",
"943",
"9432",
"9433",
"944",
"9440",
"9442",
"945",
"9450",
"9451",
"9452",
"9453",
"946",
"9462",
"947",
"9471",
"9472",
"9473",
"948",
"9480",
"9484",
"9485",
"950",
"9500",
"9509",
"951",
"9510",
"9513",
"9514",
"9515",
"9517",
"9518",
"952",
"9520",
"9521",
"9523",
"9524",
"9528",
"9529",
"953",
"9530",
"9531",
"9534",
"9535",
"9539",
"954",
"9540",
"955",
"9551",
"9552",
"9553",
"9556",
"9557",
"9558",
"9559",
"956",
"9561",
"9562",
"9563",
"9569",
"957",
"9570",
"9571",
"9578",
"9579",
"958",
"9580",
"9581",
"9582",
"9583",
"9584",
"9585",
"9587",
"9588",
"9589",
"959",
"9590",
"9591",
"9592",
"9593",
"9594",
"9595",
"9596",
"9597",
"9598",
"9599",
"960",
"9600",
"9604",
"9605",
"961",
"9610",
"9614",
"9617",
"9618",
"9619",
"962",
"9623",
"9627",
"963",
"9630",
"9631",
"9635",
"964",
"9642",
"965",
"9650",
"9651",
"9654",
"9656",
"9658",
"966",
"9661",
"9663",
"9664",
"967",
"9670",
"9671",
"9678",
"9679",
"968",
"9680",
"9683",
"9684",
"9685",
"969",
"9690",
"9691",
"9693",
"9694",
"9695",
"9696",
"9697",
"9698",
"970",
"9701",
"9708",
"971",
"9710",
"9711",
"9712",
"9713",
"972",
"9720",
"9721",
"9722",
"9724",
"9725",
"9726",
"9729",
"973",
"9733",
"9735",
"974",
"9744",
"9747",
"975",
"9752",
"9753",
"9754",
"9755",
"976",
"9760",
"9766",
"9767",
"977",
"9773",
"9778",
"9779",
"980",
"9800",
"9802",
"9809",
"982",
"9828",
"983",
"9831",
"9832",
"9839",
"985",
"9851",
"9858",
"986",
"987",
"9878",
"9879",
"988",
"9881",
"989",
"9890",
"9893",
"9894",
"9895",
"9898",
"9899",
"990",
"991",
"9911",
"9912",
"9913",
"9916",
"992",
"9920",
"994",
"9941",
"9942",
"9947",
"9948",
"9949",
"995",
"9950",
"9951",
"9952",
"9953",
"9956",
"9957",
"9958",
"9959",
"996",
"9960",
"9961",
"9962",
"9963",
"9964",
"9965",
"9966",
"9967",
"9968",
"9969",
"997",
"9970",
"9971",
"9972",
"9973",
"9974",
"9975",
"9976",
"9977",
"9979",
"998",
"9980",
"9981",
"9982",
"9983",
"9984",
"9985",
"9986",
"9988",
"9989",
"999",
"9991",
"9992",
"9993",
"9994",
"9995",
"9996",
"9997",
"9998",
"9999",
"9th",
"E000",
"E0000",
"E0008",
"E0009",
"E001",
"E0010",
"E0011",
"E002",
"E0020",
"E0026",
"E003",
"E0030",
"E0031",
"E0032",
"E0039",
"E006",
"E0060",
"E0061",
"E0062",
"E0064",
"E0069",
"E007",
"E0070",
"E0071",
"E0073",
"E0076",
"E008",
"E0080",
"E0089",
"E013",
"E0138",
"E0139",
"E016",
"E0161",
"E0162",
"E019",
"E0190",
"E029",
"E0291",
"E0299",
"E030",
"E800",
"E8002",
"E801",
"E8012",
"E804",
"E8041",
"E8042",
"E805",
"E8052",
"E8058",
"E806",
"E8062",
"E811",
"E8110",
"E812",
"E8120",
"E8121",
"E8122",
"E8123",
"E8126",
"E8127",
"E8129",
"E813",
"E8130",
"E8131",
"E8132",
"E8133",
"E8136",
"E8138",
"E814",
"E8140",
"E8141",
"E8142",
"E8145",
"E8146",
"E8147",
"E815",
"E8150",
"E8151",
"E8152",
"E816",
"E8160",
"E8161",
"E8162",
"E8163",
"E8169",
"E817",
"E8170",
"E8171",
"E8178",
"E818",
"E8180",
"E8181",
"E8182",
"E8187",
"E8188",
"E8189",
"E819",
"E8190",
"E8191",
"E8192",
"E8193",
"E8196",
"E8197",
"E8199",
"E820",
"E8200",
"E821",
"E8210",
"E8211",
"E8212",
"E8216",
"E8217",
"E8219",
"E822",
"E8227",
"E8228",
"E823",
"E8230",
"E8231",
"E8232",
"E8233",
"E8238",
"E824",
"E8240",
"E8241",
"E8242",
"E8248",
"E8249",
"E825",
"E8250",
"E8251",
"E8252",
"E8257",
"E8258",
"E826",
"E8260",
"E8261",
"E827",
"E8278",
"E828",
"E8282",
"E829",
"E8298",
"E831",
"E8311",
"E8314",
"E8318",
"E834",
"E8341",
"E8343",
"E8348",
"E835",
"E8353",
"E838",
"E8381",
"E8384",
"E840",
"E8405",
"E841",
"E8415",
"E848",
"E849",
"E8490",
"E8493",
"E8494",
"E8495",
"E8496",
"E8497",
"E8498",
"E8499",
"E850",
"E8500",
"E8501",
"E8502",
"E8503",
"E8504",
"E8508",
"E851",
"E852",
"E8528",
"E8529",
"E853",
"E8532",
"E8538",
"E854",
"E8540",
"E8541",
"E8542",
"E8543",
"E8548",
"E855",
"E8550",
"E8551",
"E8552",
"E8554",
"E8555",
"E8556",
"E856",
"E857",
"E858",
"E8580",
"E8581",
"E8582",
"E8583",
"E8584",
"E8585",
"E8586",
"E8587",
"E8588",
"E8589",
"E860",
"E8600",
"E8603",
"E8609",
"E861",
"E8613",
"E8619",
"E862",
"E8624",
"E863",
"E8637",
"E864",
"E8641",
"E865",
"E8654",
"E8655",
"E866",
"E8663",
"E8668",
"E8669",
"E869",
"E8694",
"E8698",
"E870",
"E8700",
"E8702",
"E8703",
"E8704",
"E8705",
"E8706",
"E8708",
"E8709",
"E871",
"E8710",
"E8714",
"E8716",
"E8717",
"E8718",
"E873",
"E8735",
"E874",
"E8740",
"E8742",
"E8744",
"E8748",
"E876",
"E8761",
"E8762",
"E8764",
"E8767",
"E8768",
"E8769",
"E878",
"E8780",
"E8781",
"E8782",
"E8783",
"E8784",
"E8785",
"E8786",
"E8788",
"E8789",
"E879",
"E8790",
"E8791",
"E8792",
"E8793",
"E8794",
"E8795",
"E8796",
"E8797",
"E8798",
"E8799",
"E880",
"E8800",
"E8801",
"E8809",
"E881",
"E8810",
"E8811",
"E882",
"E883",
"E8830",
"E8839",
"E884",
"E8840",
"E8841",
"E8842",
"E8843",
"E8844",
"E8845",
"E8846",
"E8849",
"E885",
"E8850",
"E8851",
"E8852",
"E8853",
"E8854",
"E8859",
"E886",
"E8860",
"E887",
"E888",
"E8880",
"E8881",
"E8888",
"E8889",
"E890",
"E8902",
"E8908",
"E891",
"E8918",
"E899",
"E900",
"E9000",
"E9001",
"E901",
"E9010",
"E9011",
"E9018",
"E9019",
"E905",
"E9051",
"E9053",
"E906",
"E9060",
"E9063",
"E9064",
"E9068",
"E908",
"E9081",
"E910",
"E9100",
"E9102",
"E9108",
"E9109",
"E911",
"E912",
"E913",
"E9132",
"E9138",
"E915",
"E916",
"E917",
"E9170",
"E9173",
"E9174",
"E9175",
"E9177",
"E9178",
"E9179",
"E918",
"E919",
"E9190",
"E9192",
"E9193",
"E9194",
"E9196",
"E9198",
"E920",
"E9200",
"E9201",
"E9203",
"E9204",
"E9205",
"E9208",
"E9209",
"E921",
"E9211",
"E922",
"E9220",
"E9222",
"E9225",
"E9229",
"E924",
"E9240",
"E9241",
"E9242",
"E9248",
"E9249",
"E925",
"E9250",
"E926",
"E9262",
"E927",
"E9270",
"E9274",
"E9278",
"E928",
"E9283",
"E9288",
"E9289",
"E929",
"E9290",
"E9291",
"E9292",
"E9293",
"E9294",
"E9295",
"E9298",
"E9299",
"E930",
"E9300",
"E9301",
"E9303",
"E9304",
"E9305",
"E9306",
"E9307",
"E9308",
"E9309",
"E931",
"E9310",
"E9313",
"E9314",
"E9315",
"E9317",
"E9318",
"E9319",
"E932",
"E9320",
"E9322",
"E9323",
"E9324",
"E9325",
"E9328",
"E9329",
"E933",
"E9330",
"E9331",
"E9334",
"E9335",
"E9338",
"E934",
"E9340",
"E9342",
"E9343",
"E9344",
"E9345",
"E9346",
"E9347",
"E9348",
"E935",
"E9351",
"E9352",
"E9353",
"E9354",
"E9356",
"E9357",
"E9358",
"E9359",
"E936",
"E9360",
"E9361",
"E9363",
"E9364",
"E937",
"E9370",
"E9378",
"E9379",
"E938",
"E9380",
"E9382",
"E9383",
"E9384",
"E9385",
"E9386",
"E9387",
"E9389",
"E939",
"E9390",
"E9391",
"E9392",
"E9393",
"E9394",
"E9395",
"E9397",
"E9398",
"E9399",
"E940",
"E9401",
"E9408",
"E941",
"E9410",
"E9411",
"E9412",
"E9413",
"E9419",
"E942",
"E9420",
"E9421",
"E9422",
"E9424",
"E9425",
"E9426",
"E9429",
"E943",
"E9430",
"E9433",
"E9438",
"E944",
"E9441",
"E9443",
"E9444",
"E9445",
"E9447",
"E945",
"E9451",
"E9452",
"E9453",
"E9455",
"E9457",
"E946",
"E9460",
"E9463",
"E9466",
"E947",
"E9470",
"E9478",
"E9479",
"E949",
"E9496",
"E9499",
"E950",
"E9500",
"E9501",
"E9502",
"E9503",
"E9504",
"E9505",
"E9506",
"E9507",
"E9509",
"E953",
"E9530",
"E9538",
"E954",
"E955",
"E9550",
"E9554",
"E9559",
"E956",
"E957",
"E9570",
"E9571",
"E9572",
"E9579",
"E958",
"E9580",
"E9581",
"E9583",
"E9585",
"E9588",
"E9589",
"E959",
"E960",
"E9600",
"E9601",
"E962",
"E9620",
"E963",
"E964",
"E965",
"E9650",
"E9651",
"E9654",
"E9659",
"E966",
"E967",
"E9670",
"E9671",
"E9673",
"E9674",
"E9677",
"E9678",
"E9679",
"E968",
"E9682",
"E9687",
"E9688",
"E9689",
"E969",
"E970",
"E975",
"E976",
"E977",
"E980",
"E9800",
"E9801",
"E9803",
"E9804",
"E9805",
"E9809",
"E982",
"E9821",
"E985",
"E9850",
"E9854",
"E986",
"E987",
"E9871",
"E988",
"E9888",
"E9889",
"E989",
"E999",
"E9991",
"V011",
"V016",
"V017",
"V0179",
"V018",
"V0189",
"V020",
"V023",
"V024",
"V025",
"V0251",
"V0252",
"V0253",
"V0254",
"V0259",
"V026",
"V0261",
"V0262",
"V029",
"V037",
"V038",
"V0381",
"V0382",
"V0389",
"V045",
"V048",
"V0481",
"V0482",
"V053",
"V058",
"V061",
"V063",
"V065",
"V066",
"V071",
"V073",
"V0739",
"V074",
"V078",
"V08",
"V090",
"V091",
"V095",
"V0950",
"V097",
"V0971",
"V098",
"V0980",
"V0981",
"V099",
"V0990",
"V0991",
"V100",
"V1000",
"V1001",
"V1002",
"V1003",
"V1004",
"V1005",
"V1006",
"V1007",
"V1009",
"V101",
"V1011",
"V1012",
"V102",
"V1020",
"V1021",
"V1022",
"V1029",
"V103",
"V104",
"V1041",
"V1042",
"V1043",
"V1044",
"V1046",
"V1047",
"V1049",
"V105",
"V1050",
"V1051",
"V1052",
"V1053",
"V1059",
"V106",
"V1060",
"V1061",
"V1062",
"V1069",
"V107",
"V1071",
"V1072",
"V1079",
"V108",
"V1081",
"V1082",
"V1083",
"V1084",
"V1085",
"V1086",
"V1087",
"V1088",
"V1089",
"V109",
"V1090",
"V1091",
"V110",
"V111",
"V113",
"V118",
"V120",
"V1201",
"V1202",
"V1203",
"V1204",
"V1209",
"V122",
"V124",
"V1241",
"V1242",
"V125",
"V1250",
"V1251",
"V1252",
"V1253",
"V1254",
"V1255",
"V1259",
"V126",
"V1261",
"V127",
"V1271",
"V1272",
"V1279",
"V130",
"V1301",
"V1302",
"V1309",
"V135",
"V1351",
"V1352",
"V136",
"V1364",
"V1365",
"V1369",
"V138",
"V1381",
"V1389",
"V140",
"V141",
"V142",
"V143",
"V145",
"V146",
"V148",
"V150",
"V1501",
"V1502",
"V1504",
"V1505",
"V1506",
"V1507",
"V1508",
"V1509",
"V151",
"V152",
"V1529",
"V153",
"V154",
"V1541",
"V1542",
"V155",
"V1551",
"V1552",
"V1553",
"V1559",
"V158",
"V1581",
"V1582",
"V1584",
"V1585",
"V1586",
"V1588",
"V1589",
"V160",
"V161",
"V162",
"V163",
"V164",
"V1641",
"V1642",
"V1643",
"V1649",
"V165",
"V1651",
"V1652",
"V1659",
"V166",
"V167",
"V168",
"V169",
"V170",
"V171",
"V173",
"V174",
"V1741",
"V1749",
"V175",
"V180",
"V181",
"V1811",
"V1819",
"V182",
"V183",
"V185",
"V1851",
"V1859",
"V186",
"V1869",
"V189",
"V195",
"V198",
"V202",
"V222",
"V230",
"V239",
"V250",
"V2501",
"V252",
"V254",
"V2541",
"V265",
"V2651",
"V2652",
"V270",
"V271",
"V272",
"V290",
"V293",
"V300",
"V3000",
"V3001",
"V301",
"V400",
"V403",
"V4031",
"V420",
"V421",
"V422",
"V425",
"V426",
"V427",
"V428",
"V4281",
"V4282",
"V4283",
"V4284",
"V4289",
"V430",
"V431",
"V433",
"V434",
"V435",
"V436",
"V4361",
"V4363",
"V4364",
"V4365",
"V438",
"V4382",
"V440",
"V441",
"V442",
"V443",
"V444",
"V445",
"V4450",
"V4451",
"V4459",
"V446",
"V448",
"V449",
"V450",
"V4501",
"V4502",
"V4509",
"V451",
"V4511",
"V4512",
"V452",
"V453",
"V454",
"V456",
"V4561",
"V4569",
"V457",
"V4571",
"V4572",
"V4573",
"V4574",
"V4575",
"V4576",
"V4577",
"V4578",
"V4579",
"V458",
"V4581",
"V4582",
"V4585",
"V4586",
"V4587",
"V4588",
"V4589",
"V461",
"V4611",
"V4614",
"V462",
"V463",
"V468",
"V469",
"V486",
"V489",
"V496",
"V4960",
"V4961",
"V4962",
"V4963",
"V4965",
"V4966",
"V497",
"V4971",
"V4972",
"V4973",
"V4975",
"V4976",
"V498",
"V4981",
"V4983",
"V4984",
"V4985",
"V4986",
"V4987",
"V4989",
"V502",
"V504",
"V5041",
"V5049",
"V51",
"V510",
"V530",
"V5301",
"V5302",
"V5309",
"V533",
"V5331",
"V5332",
"V5339",
"V536",
"V537",
"V539",
"V5391",
"V5399",
"V540",
"V5401",
"V541",
"V5410",
"V5411",
"V5412",
"V5413",
"V5415",
"V5416",
"V5417",
"V5419",
"V542",
"V5422",
"V5423",
"V5426",
"V5427",
"V548",
"V5481",
"V5482",
"V5489",
"V549",
"V550",
"V551",
"V552",
"V553",
"V554",
"V555",
"V556",
"V558",
"V560",
"V561",
"V568",
"V580",
"V581",
"V5811",
"V5812",
"V583",
"V5831",
"V584",
"V5841",
"V5843",
"V5844",
"V5849",
"V586",
"V5861",
"V5862",
"V5863",
"V5864",
"V5865",
"V5866",
"V5867",
"V5869",
"V587",
"V5873",
"V588",
"V5881",
"V5883",
"V596",
"V600",
"V601",
"V602",
"V604",
"V608",
"V610",
"V6103",
"V6104",
"V6107",
"V6109",
"V611",
"V6110",
"V6111",
"V612",
"V6129",
"V614",
"V6141",
"V6142",
"V618",
"V620",
"V624",
"V625",
"V626",
"V628",
"V6282",
"V6284",
"V6285",
"V6289",
"V632",
"V638",
"V640",
"V6406",
"V641",
"V642",
"V643",
"V644",
"V6441",
"V6442",
"V6443",
"V652",
"V653",
"V654",
"V6542",
"V6549",
"V655",
"V667",
"V671",
"V672",
"V694",
"V698",
"V703",
"V707",
"V708",
"V714",
"V716",
"V721",
"V728",
"V7281",
"V741",
"V765",
"V7651",
"V789",
"V812",
"V838",
"V8389",
"V840",
"V8401",
"V8409",
"V848",
"V8489",
"V850",
"V851",
"V852",
"V8521",
"V8522",
"V8523",
"V8524",
"V8525",
"V853",
"V8530",
"V8531",
"V8532",
"V8533",
"V8534",
"V8535",
"V8536",
"V8537",
"V8538",
"V8539",
"V854",
"V8541",
"V8542",
"V8543",
"V8544",
"V8545",
"V860",
"V861",
"V870",
"V8709",
"V872",
"V874",
"V8741",
"V8745",
"V880",
"V8801",
"V881",
"V8811",
"V8812",
"V882",
"V8821",
"V901",
"V9010",
"V903",
"V9039",
"V908",
"V9081",
"V9089",
"V910",
"V9103",
"abdomen",
"abdominal",
"abducens",
"able",
"abnormal",
"abnormalities",
"abnormality",
"abo",
"abortion",
"abrasion",
"abscess",
"absence",
"abuse",
"acanthosis",
"accessory",
"accident",
"accidental",
"accidentally",
"accidents",
"acetabulum",
"acetonuria",
"achalasia",
"achieved",
"achilles",
"acid",
"acidbase",
"acidosis",
"acids",
"acne",
"acoustic",
"acquired",
"acromegaly",
"acromial",
"acromioclavicular",
"acting",
"actinic",
"actinomycotic",
"action",
"active",
"activities",
"activity",
"acuminatum",
"acute",
"adem",
"adenoids",
"adenovirus",
"adequate",
"adhesions",
"adhesive",
"adiposity",
"adjustment",
"administration",
"administrative",
"admission",
"adnexa",
"adolescents",
"adrenal",
"adrenergics",
"adrenogenital",
"adult",
"adultpediatric",
"adults",
"adverse",
"affecting",
"affections",
"affective",
"aftercare",
"agenesis",
"agent",
"agents",
"agerelated",
"aggressive",
"agitans",
"agoraphobia",
"agricultural",
"air",
"aircraft",
"airway",
"alcohol",
"alcoholic",
"alcoholinduced",
"alcoholism",
"alexia",
"alighting",
"alimentary",
"alkalis",
"alkaloids",
"alkalosis",
"allergen",
"allergens",
"allergic",
"allergy",
"allied",
"alone",
"alopecia",
"alpha",
"alpha1antitrypsin",
"alpine",
"alteration",
"alterations",
"altered",
"alveolar",
"alveolitis",
"alzheimers",
"amblyopia",
"american",
"aminoacid",
"amnesia",
"amnestic",
"amniotic",
"amphetamine",
"amphetamines",
"ampulla",
"amputation",
"amyloidosis",
"amyotrophic",
"anaerobes",
"anal",
"analgesic",
"analgesics",
"anaphylactic",
"anaphylaxis",
"anaplastic",
"anastomosis",
"anatomical",
"andor",
"anemia",
"anemias",
"anesthesia",
"anesthetics",
"aneurysm",
"angiitis",
"angina",
"angiodysplasia",
"angioneurotic",
"angiopathy",
"angioplasty",
"angle",
"angleclosure",
"animal",
"animaldrawn",
"animals",
"anisocoria",
"ankle",
"ankylosing",
"ankylosis",
"anomalies",
"anomalous",
"anomaly",
"anorexia",
"another",
"anoxic",
"antacids",
"antagonists",
"antepartum",
"anterior",
"anterolateral",
"antiadrenergics",
"antiallergic",
"antiarteriosclerotic",
"antiasthmatics",
"antibiotic",
"antibiotics",
"anticholinergics",
"anticoagulant",
"anticoagulants",
"anticonvulsant",
"anticonvulsants",
"antidepressant",
"antidepressants",
"antidiabetic",
"antidiarrheal",
"antiemetic",
"antifungal",
"antigastric",
"antigen",
"antihypertensive",
"antiinfective",
"antiinfectives",
"antiinflammatories",
"antiinflammatory",
"antilipemic",
"antimalarials",
"antimuscarinics",
"antimycobacterial",
"antineoplastic",
"antiparkinsonism",
"antiphlogistics",
"antiplateletantithrombotic",
"antiprotozoal",
"antipsychotics",
"antipyretic",
"antipyretics",
"antirheumatics",
"antisocial",
"antithyroid",
"antitussives",
"antiviral",
"antrum",
"anuria",
"anus",
"anxiety",
"anxiolytic",
"aorta",
"aortic",
"aortitis",
"aortocoronary",
"apex",
"aphasia",
"aphonia",
"aphthae",
"apical",
"aplastic",
"apnea",
"apparatus",
"appearance",
"appendicitis",
"appendix",
"appliances",
"application",
"applied",
"apraxia",
"arachnids",
"arch",
"area",
"areata",
"arising",
"arm",
"aromatic",
"around",
"arousal",
"arrest",
"arsenic",
"artefacta",
"arterial",
"arteries",
"arterioles",
"arteriosus",
"arteriovenous",
"arteritis",
"artery",
"arthralgia",
"arthritis",
"arthrodesis",
"arthropathy",
"arthropod",
"arthroscopic",
"articular",
"artificial",
"asbestos",
"asbestosis",
"ascariasis",
"ascending",
"ascites",
"ascorbic",
"ascus",
"aseptic",
"aspect",
"aspergillosis",
"asphyxia",
"asphyxiation",
"aspiration",
"aspirin",
"assault",
"associated",
"asthma",
"asthmaticus",
"astragalus",
"asymptomatic",
"ataxia",
"atelectasis",
"atheroembolism",
"atherosclerosis",
"athletics",
"atonia",
"atony",
"atopic",
"atresia",
"atrial",
"atrioventricular",
"atrophic",
"atrophicae",
"atrophy",
"attack",
"attacks",
"attention",
"atypical",
"auditory",
"aura",
"aureus",
"auricle",
"autistic",
"autoimmune",
"autologous",
"automatic",
"autonomic",
"autosomal",
"averse",
"avian",
"avulsion",
"awaiting",
"awareness",
"axilla",
"axillary",
"b",
"b12",
"b19",
"babesiosis",
"bacilli",
"bacillus",
"back",
"backache",
"background",
"bacteremia",
"bacteria",
"bacterial",
"bacteriological",
"bacterium",
"bacteriuria",
"bacteroides",
"balance",
"balanitis",
"balanoposthitis",
"bandemia",
"bandshaped",
"barbiturates",
"bariatric",
"barretts",
"bartholins",
"bartonellosis",
"basal",
"base",
"baseball",
"basilar",
"basketball",
"bcomplex",
"beard",
"beats",
"bed",
"bees",
"behavior",
"behavioral",
"behcets",
"bells",
"benign",
"benzodiazepinebased",
"bereavement",
"beriberi",
"beta",
"better",
"beverages",
"biceps",
"bicipital",
"bifida",
"bike",
"bilateral",
"bile",
"biliary",
"bilious",
"bilirubin",
"bimalleolar",
"biological",
"bipolar",
"birth",
"bite",
"black",
"blactam",
"bladder",
"blastomycosis",
"bleb",
"bleeding",
"blepharitis",
"blepharospasm",
"blindness",
"blister",
"blisters",
"block",
"blood",
"bloodclot",
"bloodforming",
"bloodstream",
"blowout",
"blunt",
"boarding",
"boat",
"bodies",
"body",
"boiling",
"bone",
"bones",
"border",
"borderline",
"born",
"botulism",
"bowel",
"boxing",
"boyfriend",
"brachial",
"bradycardia",
"brain",
"branch",
"branches",
"branchial",
"brawl",
"breast",
"breath",
"breech",
"brief",
"broad",
"broken",
"bronchiectasis",
"bronchiolitis",
"bronchitis",
"bronchopneumonia",
"bronchopulmonary",
"bronchospasm",
"bronchus",
"brucellosis",
"buccal",
"buddchiari",
"buergers",
"building",
"bulbar",
"bulbus",
"bulimia",
"bullous",
"bundle",
"bunion",
"buphthalmos",
"burkitts",
"burn",
"burns",
"bursa",
"bursae",
"bursitis",
"buttock",
"butyrophenonebased",
"bypass",
"c",
"c1c4",
"c5c7",
"cachexia",
"caffeine",
"calcaneal",
"calcaneus",
"calcification",
"calcified",
"calcifying",
"calcium",
"calculi",
"calculus",
"calf",
"callosities",
"caloric",
"campylobacter",
"canal",
"candida",
"candidal",
"candidiasis",
"cannabis",
"capillary",
"capitate",
"capitis",
"capsular",
"capsulatum",
"capsule",
"capsulitis",
"carbamate",
"carbohydrate",
"carbon",
"carbuncle",
"carcinoid",
"carcinoma",
"cardia",
"cardiac",
"cardiogenic",
"cardiomegaly",
"cardiomyopathies",
"cardiomyopathy",
"cardiospasm",
"cardiotonic",
"cardiovascular",
"care",
"caregiver",
"caries",
"carinatum",
"carotid",
"carpal",
"carried",
"carrier",
"cartilage",
"cartilages",
"caruncle",
"cat",
"cataplexy",
"cataract",
"catatonic",
"cathartics",
"catheter",
"catheterization",
"cauda",
"caught",
"causalgia",
"cause",
"caused",
"causes",
"causing",
"caustic",
"cava",
"cavitation",
"cavities",
"cavity",
"cecum",
"celiac",
"cell",
"cells",
"cellulitis",
"central",
"cephalosporin",
"cephalosporins",
"cerebellar",
"cerebellum",
"cerebral",
"cerebrospinal",
"cerebrovascular",
"cerebrum",
"certain",
"cerumen",
"cervical",
"cervicalgia",
"cervicitis",
"cervix",
"cesarean",
"chagas",
"chair",
"chalazion",
"chambers",
"change",
"changes",
"channels",
"check",
"cheek",
"chemical",
"chemicals",
"chemistry",
"chemotherapy",
"chest",
"cheynestokes",
"chiasm",
"chickenpox",
"chiefly",
"child",
"childbirth",
"childhood",
"chills",
"chlamydial",
"chloral",
"choanal",
"cholangitis",
"cholecystectomy",
"cholecystitis",
"cholelithiasis",
"choleperitonitis",
"cholera",
"cholesteatoma",
"cholesterin",
"cholesterolosis",
"cholinergics",
"chondritis",
"chondrocalcinosis",
"chondrodystrophy",
"chordae",
"chorea",
"choreas",
"choriomeningitis",
"chorioretinitis",
"choroid",
"choroidal",
"chromosome",
"chronic",
"chronicus",
"ciliary",
"circadian",
"circle",
"circulating",
"circulation",
"circulatory",
"circumcision",
"circumscribed",
"circumstances",
"cirrhosis",
"civilian",
"classifiable",
"classified",
"claudication",
"clavicle",
"claw",
"cleansing",
"cleft",
"cliff",
"climacteric",
"clinical",
"clonorchiasis",
"closed",
"clostridium",
"closure",
"clotting",
"cluster",
"coagulants",
"coagulation",
"coal",
"coarctation",
"cocaine",
"coccidioidomycosis",
"coccyx",
"cochlea",
"cognition",
"cognitive",
"cold",
"coli",
"colitis",
"collagen",
"collapse",
"collateral",
"colles",
"collision",
"colon",
"colonic",
"color",
"colostomy",
"column",
"coma",
"combinations",
"combined",
"commode",
"common",
"communicable",
"communicating",
"compartment",
"complaint",
"complete",
"completed",
"completepartial",
"complex",
"complicated",
"complicating",
"complication",
"complications",
"complicationwithout",
"compounds",
"compression",
"concussion",
"condensans",
"condition",
"conditions",
"conduct",
"conduction",
"conductive",
"condylar",
"condyle",
"condyles",
"condyloma",
"confinement",
"confirmed",
"conflagration",
"confusion",
"congenita",
"congenital",
"congestion",
"congestive",
"conjugate",
"conjunctiva",
"conjunctival",
"conjunctivitis",
"connection",
"connective",
"conns",
"conscience",
"conscious",
"consciousness",
"constipation",
"constituents",
"constrictive",
"construction",
"contact",
"contagiosum",
"contents",
"continua",
"continuous",
"contraceptive",
"contraceptives",
"contracture",
"contraindication",
"control",
"contusion",
"conversion",
"converted",
"convulsions",
"convulsive",
"coordination",
"copper",
"cor",
"coracoid",
"cord",
"cordis",
"cords",
"cornea",
"corneal",
"corns",
"coronary",
"coronoid",
"corpus",
"correct",
"corrected",
"corrosive",
"cortex",
"cortical",
"corticoadrenal",
"cough",
"counseling",
"count",
"coxsackie",
"cracked",
"cramp",
"cranial",
"craniopharyngeal",
"crashing",
"creactive",
"crew",
"crisis",
"critical",
"crp",
"cruciate",
"crushing",
"crustaceans",
"cryptococcal",
"cryptococcosis",
"cryptogenic",
"cryptosporidiosis",
"crystal",
"crystalline",
"crystals",
"cuboid",
"cuff",
"culture",
"cumulative",
"cuneiform",
"curb",
"current",
"curvature",
"cushings",
"cushion",
"cut",
"cutaneous",
"cutaneousvesicostomy",
"cutting",
"cyanides",
"cyanosis",
"cycle",
"cyclic",
"cyclist",
"cyclothymic",
"cylinders",
"cyst",
"cystic",
"cystica",
"cysticercosis",
"cystitis",
"cystocele",
"cystoid",
"cystostomy",
"cysts",
"cytomegalic",
"cytomegaloviral",
"dacryoadenitis",
"dacryocystitis",
"daggers",
"damage",
"dander",
"deaf",
"death",
"debility",
"decision",
"decreased",
"deep",
"defect",
"defects",
"defiant",
"defibrillator",
"defibrination",
"deficiencies",
"deficiency",
"deficit",
"deficits",
"defined",
"deformans",
"deformities",
"deformity",
"degenerated",
"degeneration",
"degenerations",
"degenerative",
"degree",
"degreenot",
"dehydration",
"dehydrogenase",
"delay",
"delayed",
"delays",
"deletion",
"deletions",
"delirium",
"delivered",
"delivery",
"delta",
"delusional",
"delusions",
"dementia",
"demulcents",
"demyelinating",
"dental",
"dentofacial",
"dependence",
"depletion",
"deposits",
"depressants",
"depressed",
"depressive",
"derangement",
"derivative",
"derivatives",
"dermatitis",
"dermatographic",
"dermatomycoses",
"dermatomycosis",
"dermatomyositis",
"dermatophytosis",
"dermatoses",
"des",
"descending",
"desensitization",
"detachment",
"detergents",
"deterrents",
"detrusor",
"development",
"developmental",
"deviated",
"deviation",
"device",
"devices",
"diabetes",
"diabetic",
"diagnosis",
"dialysis",
"diaper",
"diaphragm",
"diaphragmatic",
"diarrhea",
"diastasis",
"diastolic",
"dicalcium",
"dichorionicdiamniotic",
"diencephalohypophyseal",
"dietary",
"dietetics",
"diethylstilbestrol",
"dieulafoy",
"different",
"differentiated",
"difficile",
"difficulties",
"difficulty",
"diffuse",
"digestive",
"digestivegenital",
"digit",
"digital",
"digits",
"dilatation",
"diphtheria",
"diphtheriatetanuspertussis",
"diplegia",
"diplopia",
"diptheriatetanus",
"disabilities",
"disaccharidase",
"disaccharide",
"disc",
"discharge",
"disciform",
"discomfort",
"disease",
"diseases",
"disfigurements",
"disinfectants",
"dislocation",
"disorder",
"disorders",
"disorganized",
"displacement",
"disruption",
"dissection",
"disseminated",
"dissociated",
"dissociative",
"distal",
"distant",
"distortions",
"distress",
"disturbance",
"disturbances",
"disuse",
"diuretics",
"diverticulitis",
"diverticulosis",
"diverticulum",
"diving",
"divorce",
"dizziness",
"dog",
"dome",
"domestic",
"dominant",
"done",
"donors",
"dorsal",
"dorsalis",
"doubling",
"downhill",
"downs",
"drainage",
"drawn",
"dressing",
"drip",
"driver",
"drop",
"drowning",
"drug",
"druginduced",
"drugresistant",
"drugs",
"drum",
"dt",
"dtap",
"dtp",
"dual",
"duanes",
"duboisii",
"duct",
"ducts",
"ductus",
"due",
"duodenal",
"duodenitis",
"duodenum",
"dura",
"dural",
"duration",
"dwarfism",
"dwelling",
"dye",
"dysarthria",
"dyschromia",
"dysfunction",
"dysfunctions",
"dysgenesis",
"dyskinesia",
"dyslexia",
"dysmenorrhea",
"dysmetabolic",
"dyspepsia",
"dysphagia",
"dysphasia",
"dysphonia",
"dysplasia",
"dysreflexia",
"dysrhythmia",
"dysrhythmias",
"dyssynergia",
"dysthymic",
"dystonia",
"dystrophies",
"dystrophy",
"dysuria",
"e",
"ear",
"eardrum",
"early",
"eastern",
"eaten",
"eating",
"ebsteins",
"ecchymoses",
"ecg",
"echinococcosis",
"echoencephalogram",
"eclampsia",
"ectasia",
"ectopic",
"ectropion",
"eczema",
"edema",
"edentulism",
"eeg",
"effect",
"effects",
"effusion",
"ehlersdanlos",
"ehrlichiosis",
"eight",
"ekg",
"elbow",
"elderly",
"electric",
"electrocardiogram",
"electrocution",
"electrode",
"electroencephalogram",
"electrolyte",
"electrolytic",
"elevated",
"elevation",
"elliptocytosis",
"elsewhere",
"embolism",
"embolus",
"emesis",
"emollients",
"emotional",
"emotionalpsychological",
"emotions",
"emphysema",
"emphysematous",
"emptying",
"empyema",
"enabling",
"enamel",
"encephalitis",
"encephalocele",
"encephalomyelitis",
"encephalopathy",
"encounter",
"end",
"endocardial",
"endocarditis",
"endocervicitis",
"endocervix",
"endocrine",
"endometrial",
"endometriosis",
"endometritis",
"endophthalmitis",
"endoscopic",
"endosseous",
"engaged",
"enlargement",
"enophthalmos",
"entanglement",
"entering",
"enteritis",
"enterococcus",
"enterocolitis",
"enterohemorrhagic",
"enterostomy",
"enterovirus",
"enthesopathy",
"entoptic",
"enuresis",
"environmental",
"enzyme",
"enzymes",
"eosinophilia",
"eosinophilic",
"epicondylitis",
"epidermal",
"epididymitis",
"epididymoorchitis",
"epigastric",
"epiglottica",
"epiglottis",
"epiglottitis",
"epilepsia",
"epilepsy",
"epileptic",
"epiphora",
"epiphysis",
"episcleritis",
"episode",
"episodic",
"epistaxis",
"epitheliopathy",
"equina",
"equine",
"equinovarus",
"equinus",
"equipment",
"er",
"eructation",
"eruption",
"erysipelas",
"erythema",
"erythematosus",
"erythematous",
"erythromelalgia",
"erythromycin",
"escalator",
"escherichia",
"esophageal",
"esophagitis",
"esophagostomy",
"esophagus",
"esotropia",
"essences",
"essential",
"estrangement",
"estrogen",
"ethmoidal",
"ethyl",
"euthyroid",
"evans",
"event",
"evidence",
"exacerbation",
"examination",
"examinations",
"exanthem",
"exanthemata",
"excavatum",
"except",
"excessive",
"excitation",
"excluding",
"excretion",
"executive",
"exercise",
"exfoliation",
"existing",
"exophoria",
"exophthalmos",
"exostosis",
"exotropia",
"expectorants",
"explantation",
"explosion",
"explosive",
"explosives",
"exposure",
"expressive",
"expulsive",
"extending",
"extensor",
"externa",
"external",
"externum",
"extracorporeal",
"extraction",
"extradural",
"extrahepatic",
"extranodal",
"extrapyramidal",
"extravasation",
"extreme",
"extremes",
"extremities",
"extremity",
"extrinsic",
"exudative",
"eye",
"eyeball",
"eyelid",
"eyelids",
"eyes",
"face",
"facial",
"facilities",
"facility",
"factitia",
"factitious",
"factor",
"factors",
"failure",
"falciparum",
"fall",
"falling",
"fallopian",
"fallot",
"false",
"familial",
"family",
"fascia",
"fascial",
"fascicular",
"fasciitis",
"fasting",
"fat",
"father",
"fatigue",
"fatty",
"feared",
"features",
"febrile",
"fecal",
"feces",
"feeding",
"feigning",
"felon",
"feltys",
"female",
"femoral",
"femur",
"fertilizers",
"fetal",
"fetus",
"fever",
"fibrillation",
"fibrinolysisaffecting",
"fibroelastosis",
"fibromatoses",
"fibromatosis",
"fibroplasia",
"fibrosis",
"fibula",
"field",
"fifth",
"fight",
"filariasis",
"film",
"finding",
"findings",
"finger",
"fingers",
"fire",
"firearm",
"firearms",
"first",
"firstdegree",
"fish",
"fissure",
"fistula",
"fistulas",
"fitting",
"five",
"fixation",
"flaccid",
"flag",
"flail",
"flat",
"flatulence",
"flexneri",
"flexure",
"floor",
"fluency",
"fluid",
"fluids",
"fluroquinolones",
"flushing",
"flutter",
"focal",
"folatedeficiency",
"follicles",
"follicular",
"following",
"followup",
"food",
"foods",
"foot",
"football",
"forearm",
"foregut",
"forehead",
"foreign",
"form",
"formation",
"forming",
"forms",
"fossa",
"found",
"four",
"fourth",
"fracture",
"fractured",
"fractures",
"fragile",
"fragilis",
"fragments",
"frequency",
"frequent",
"friction",
"friedlnders",
"friedreichs",
"frontal",
"frontotemporal",
"frostbite",
"fruits",
"full",
"fullthickness",
"fully",
"fume",
"fumes",
"function",
"functional",
"fundus",
"fungi",
"fungoides",
"furniture",
"furuncle",
"fusion",
"g",
"gain",
"gait",
"galactorrhea",
"gallbladder",
"gallstone",
"gamma",
"ganglia",
"ganglion",
"gangrene",
"gangrenosum",
"gardening",
"gas",
"gaseous",
"gases",
"gastric",
"gastrin",
"gastritis",
"gastroduodenitis",
"gastroenteritis",
"gastroesophageal",
"gastrointestinal",
"gastrojejunal",
"gastroparesis",
"gastrostomy",
"gaze",
"gender",
"general",
"generalized",
"genetic",
"geniculate",
"genital",
"genitalia",
"genitals",
"genitourinary",
"geographic",
"gerstmannstrusslerscheinker",
"gestation",
"giant",
"giardiasis",
"giddiness",
"gigantism",
"gingival",
"gingivitis",
"gingivostomatitis",
"girdle",
"girdles",
"gland",
"glands",
"glandular",
"glass",
"glaucoma",
"glenoid",
"global",
"globe",
"globulin",
"glomerulonephritis",
"glossitis",
"glossodynia",
"glossopharyngeal",
"glottis",
"glucocorticoid",
"glucose",
"glutathione",
"glycogenosis",
"glycosides",
"glycosuria",
"goiter",
"golf",
"gonadal",
"gonococcal",
"goodpastures",
"gout",
"gouty",
"grade",
"graft",
"graftversushost",
"gramnegative",
"grams",
"grand",
"granulation",
"granuloma",
"granulomatosis",
"gravid",
"gravidarum",
"gravis",
"great",
"greater",
"groin",
"gross",
"group",
"growth",
"gum",
"gun",
"h",
"hair",
"hallucinations",
"hallucinogen",
"hallucinogens",
"hallux",
"hamartoses",
"hamate",
"hammer",
"hand",
"handgun",
"hands",
"hanging",
"hard",
"hazardous",
"hazards",
"hbss",
"head",
"headache",
"healing",
"health",
"hearing",
"heart",
"heartburn",
"heat",
"heavyfordates",
"heel",
"helicobacter",
"hemangioma",
"hemarthrosis",
"hematemesis",
"hematocrit",
"hematological",
"hematoma",
"hematometra",
"hematopoietic",
"hematuria",
"hemiblock",
"hemiparesis",
"hemiplegia",
"hemivertebra",
"hemochromatosis",
"hemodialysis",
"hemoglobinopathies",
"hemoglobinuria",
"hemolysis",
"hemolytic",
"hemolyticuremic",
"hemopericardium",
"hemoperitoneum",
"hemophagocytic",
"hemophilia",
"hemophilus",
"hemophthalmos",
"hemoptysis",
"hemorrhage",
"hemorrhagic",
"hemorrhoidal",
"hemorrhoids",
"hemosiderosis",
"hemothorax",
"heparininduced",
"hepatic",
"hepatitis",
"hepatomegaly",
"hepatopulmonary",
"hepatorenal",
"hereditary",
"hernia",
"heroin",
"herpes",
"herpesvirus",
"herpetic",
"herpeticum",
"herpetiformis",
"hesitancy",
"heteronymous",
"heterotopic",
"heterotropia",
"hib",
"hiccough",
"hidradenitis",
"high",
"highrisk",
"highway",
"hiking",
"hip",
"hirschsprungs",
"hirsutism",
"histiocytic",
"histological",
"histologically",
"histoplasma",
"histoplasmosis",
"history",
"histrionic",
"hit",
"hiv",
"hiv2",
"hockey",
"hodgkins",
"hole",
"home",
"homicidal",
"homonymous",
"hordeolum",
"hormone",
"hormones",
"hornets",
"horseback",
"horticultural",
"hospital",
"hot",
"hour",
"hours",
"household",
"housing",
"hpv",
"htlvi",
"human",
"humerus",
"hunger",
"hungry",
"hunting",
"huntingtons",
"hydantoin",
"hydrate",
"hydrocele",
"hydrocephalus",
"hydrocyanic",
"hydronephrosis",
"hydrops",
"hydroureter",
"hydroxyquinoline",
"hygiene",
"hyperactivity",
"hyperacusis",
"hyperaldosteronism",
"hyperalimentation",
"hypercalcemia",
"hypercholesterolemia",
"hypercoagulable",
"hyperemesis",
"hyperfunction",
"hypergammaglobulinemia",
"hyperglyceridemia",
"hyperhidrosis",
"hyperlipidemia",
"hypernasality",
"hypernatremia",
"hyperosmolality",
"hyperosmolarity",
"hyperostosis",
"hyperparathyroidism",
"hyperpigmentation",
"hyperplasia",
"hyperpotassemia",
"hypersensitivity",
"hypersomnia",
"hypersplenism",
"hypertension",
"hypertensioncomplicating",
"hypertensive",
"hyperthermia",
"hypertonicity",
"hypertrophic",
"hypertrophy",
"hyperventilation",
"hyphema",
"hypnotic",
"hypnotics",
"hypocalcemia",
"hypochondriasis",
"hypodermic",
"hypofunction",
"hypogammaglobulinemia",
"hypogastric",
"hypoglossal",
"hypoglycemia",
"hypoinsulinemia",
"hyponatremia",
"hypoparathyroidism",
"hypopharynx",
"hypoplasia",
"hypopotassemia",
"hyposmolality",
"hypospadias",
"hypostasis",
"hypotension",
"hypothalamic",
"hypothermia",
"hypothyroidism",
"hypoventilation",
"hypoventilationhypoxemia",
"hypovolemia",
"hypoxemia",
"hysterectomy",
"iatrogenic",
"ice",
"ideation",
"identified",
"identity",
"idiopathic",
"iga",
"ii",
"iii",
"ileocolitis",
"ileostomy",
"ileum",
"ileus",
"iliac",
"ilium",
"illdefined",
"illness",
"immaturity",
"immediate",
"immune",
"immunity",
"immunization",
"immunodeficiency",
"immunoglobulin",
"immunological",
"immunoproliferative",
"immunosuppressive",
"immunotherapy",
"impact",
"impacted",
"impaction",
"impaired",
"impairment",
"imperfecta",
"impetigo",
"implant",
"implantable",
"implanted",
"implements",
"impotence",
"impulse",
"impulsiveness",
"inadequate",
"inappropriate",
"incidental",
"incisional",
"including",
"inclusion",
"income",
"incompatibility",
"incompetence",
"incomplete",
"incontinence",
"increased",
"index",
"individually",
"induced",
"industrial",
"indwelling",
"inertia",
"infant",
"infantile",
"infants",
"infarction",
"infected",
"infection",
"infections",
"infectious",
"infective",
"inferior",
"inferolateral",
"inferoposterior",
"infertility",
"infestation",
"infestations",
"infiltration",
"inflammation",
"inflammatory",
"inflicted",
"influences",
"influencing",
"influenza",
"influenzae",
"infrequent",
"infundibular",
"infusion",
"ingestion",
"ingrowing",
"inguinal",
"inhalation",
"inhibitors",
"initial",
"initiating",
"injection",
"injuries",
"injuring",
"injury",
"inline",
"innervation",
"innominate",
"inoculation",
"inph",
"insect",
"insects",
"insertion",
"insipidus",
"insomnia",
"instantaneous",
"institution",
"instrument",
"instruments",
"insufficiency",
"insulin",
"insulins",
"intellectual",
"intercostal",
"interferon",
"intermediate",
"intermittent",
"internal",
"internally",
"internuclear",
"internum",
"interphalangeal",
"interstitial",
"intertrochanteric",
"intervention",
"intervertebral",
"intestinal",
"intestine",
"intestines",
"intestinovesical",
"intoxication",
"intra",
"intraabdominal",
"intracerebral",
"intracranial",
"intractable",
"intraepithelial",
"intrahepatic",
"intramural",
"intraocular",
"intrapelvic",
"intraretinal",
"intraspinal",
"intrathoracic",
"intrauterine",
"intravenous",
"intraventricular",
"introduce",
"intussusception",
"inversion",
"inversus",
"involuntary",
"involvement",
"involving",
"iodine",
"iridocyclitis",
"iris",
"iron",
"irradiation",
"irregular",
"irritable",
"ischemia",
"ischemias",
"ischemic",
"ischium",
"islets",
"isoimmunization",
"isolated",
"isopropyl",
"isthmus",
"iv",
"ix",
"jaundice",
"jaw",
"jaws",
"jejunum",
"joint",
"joints",
"jugular",
"jumping",
"junction",
"juvenile",
"k",
"kaposis",
"keratitis",
"keratoconjunctivitis",
"keratoderma",
"keratopathy",
"keratosis",
"ketoacidosis",
"kidney",
"kinking",
"klebsiella",
"klinefelters",
"klippelfeil",
"knee",
"knives",
"known",
"kugelbergwelander",
"kwashiorkor",
"kyphoscoliosis",
"kyphosis",
"labor",
"labrum",
"labyrinthine",
"labyrinthitis",
"laceration",
"lacerationhemorrhage",
"lack",
"lacrimal",
"lactic",
"ladder",
"lagophthalmos",
"landing",
"landscaping",
"langerhans",
"language",
"laparoscopic",
"large",
"larger",
"laryngeal",
"laryngitis",
"larynx",
"last",
"late",
"latent",
"later",
"lateral",
"latex",
"lawn",
"laxity",
"ldh",
"lead",
"leak",
"leakage",
"learning",
"left",
"leftsided",
"leg",
"legal",
"legally",
"legionnaires",
"legs",
"leiomyoma",
"leishmaniasis",
"length",
"lens",
"leprosy",
"lesion",
"lesions",
"less",
"lesser",
"letterersiwe",
"leukemia",
"leukemic",
"leukemoid",
"leukocytes",
"leukocytopenia",
"leukocytosis",
"leukodystrophy",
"leukoencephalopathy",
"leukorrhea",
"level",
"levels",
"lewy",
"lichen",
"lichenification",
"lifestyle",
"lifting",
"ligament",
"ligaments",
"ligation",
"ligature",
"light",
"lightfordates",
"lightfordateswithout",
"limb",
"limbs",
"limited",
"lip",
"lipidoses",
"lipodystrophy",
"lipoid",
"lipoma",
"lipoprotein",
"lips",
"liquid",
"liquids",
"listeriosis",
"liveborn",
"liver",
"loads",
"lobe",
"lobes",
"local",
"localizationrelated",
"localized",
"location",
"lockedin",
"long",
"longitudinal",
"longterm",
"loose",
"loosening",
"lordosis",
"loss",
"louse",
"low",
"lower",
"lowerinner",
"lowerouter",
"lumbago",
"lumbar",
"lumbosacral",
"lump",
"lunate",
"lung",
"lupus",
"luteum",
"luts",
"lying",
"lyme",
"lymph",
"lymphadenitis",
"lymphangioleiomyomatosis",
"lymphangioma",
"lymphangitis",
"lymphatic",
"lymphedema",
"lymphocytic",
"lymphocytichistiocytic",
"lymphocytopenia",
"lymphocytosis",
"lymphoid",
"lymphoma",
"lymphomas",
"lymphoproliferative",
"lymphosarcoma",
"lymphotrophic",
"lysis",
"machine",
"machinery",
"machines",
"macrodactylia",
"macroglobulinemia",
"macrolides",
"macular",
"made",
"magnesium",
"magnum",
"main",
"maintaining",
"maintenance",
"major",
"mal",
"malabsorption",
"maladjustment",
"malaise",
"malar",
"malaria",
"malayan",
"male",
"malformation",
"malformations",
"malfunction",
"malignant",
"malleolus",
"malnutrition",
"malocclusion",
"malposition",
"malpresentation",
"maltreatment",
"malunion",
"mammary",
"mammogram",
"mammographic",
"management",
"mandible",
"manic",
"manifestation",
"manifestations",
"manmade",
"mantle",
"marasmus",
"marching",
"marfan",
"marginal",
"marital",
"markers",
"marrow",
"mass",
"massive",
"mast",
"mastectomy",
"mastodynia",
"mastoid",
"mastoiditis",
"mastopathy",
"material",
"maternal",
"matter",
"maxillary",
"means",
"measure",
"measurement",
"mechanical",
"mechanism",
"meckels",
"meconium",
"media",
"medial",
"median",
"mediastinitis",
"mediastinum",
"mediated",
"medical",
"medications",
"medicinal",
"medicines",
"mediterranean",
"medullary",
"megacolon",
"megakaryocytic",
"megaloblastic",
"melanoma",
"mellitus",
"member",
"membrane",
"membranes",
"membranoproliferative",
"membranous",
"memory",
"men",
"meninges",
"meningismus",
"meningitis",
"meningococcal",
"meningococcemia",
"meningoencephalitis",
"meniscus",
"menopausal",
"menopause",
"menorrhagia",
"menstrual",
"menstruation",
"mental",
"mention",
"meralgia",
"merkel",
"mesenteric",
"mesenteritis",
"metabolic",
"metabolism",
"metacarpal",
"metacarpophalangeal",
"metacarpus",
"metal",
"metals",
"metalworking",
"metaplasia",
"metatarsal",
"metatarsophalangeal",
"methadone",
"methemoglobinemia",
"methicillin",
"methods",
"metrorrhagia",
"microangiopathy",
"microcalcification",
"microcephalus",
"microorganisms",
"microscopic",
"microscopy",
"microtia",
"microvascular",
"midcarpal",
"midcervical",
"middle",
"midfoot",
"midline",
"migraine",
"migrainosus",
"migrans",
"mild",
"miliary",
"milk",
"mineral",
"mineralocorticoid",
"minor",
"minutes",
"miosis",
"miotics",
"mirabilis",
"misadventure",
"misadventures",
"mismanagement",
"missed",
"missile",
"mitochondrial",
"mitral",
"mixed",
"mnires",
"mobitz",
"moderate",
"molar",
"molluscum",
"monitoring",
"monoamine",
"monoarthritis",
"monoclonal",
"monocytic",
"monocytosis",
"mononeuritis",
"mononucleosis",
"monoplegia",
"monoxide",
"monteggias",
"mood",
"morbid",
"morganii",
"mother",
"motor",
"motorcycle",
"motorcyclist",
"motordriven",
"mouth",
"movement",
"movements",
"moving",
"mower",
"moyamoya",
"mucopurulent",
"mucormycosis",
"mucosa",
"mucosal",
"mucositis",
"mucous",
"multangular",
"multifocal",
"multiforme",
"multigravida",
"multinodular",
"multiple",
"multiplex",
"multisystemic",
"murmurs",
"muscle",
"muscles",
"muscletone",
"muscular",
"musculoskeletal",
"mushrooms",
"myalgia",
"myasthenia",
"myasthenic",
"mycetomas",
"mycobacteria",
"mycobacterial",
"mycoplasma",
"mycoses",
"mycosis",
"mycotic",
"mydriasis",
"mydriatics",
"myelitis",
"myelodysplastic",
"myelofibrosis",
"myeloid",
"myeloma",
"myelopathies",
"myelopathy",
"myelophthisis",
"myocardial",
"myocarditis",
"myoclonus",
"myogenic",
"myoglobinuria",
"myoneural",
"myopathies",
"myopathy",
"myopia",
"myositis",
"myotonia",
"myotonic",
"myringitis",
"nail",
"nails",
"named",
"napkin",
"narcissistic",
"narcolepsy",
"narcotic",
"narcotics",
"nasal",
"nasolacrimal",
"nasopharynx",
"native",
"natural",
"nature",
"nausea",
"navicular",
"nec",
"neck",
"necrolysis",
"necrosis",
"necrotizing",
"need",
"needle",
"negative",
"neglect",
"neighboring",
"neonatal",
"neoplasia",
"neoplasm",
"neoplasms",
"neoplastic",
"nephritis",
"nephrogenic",
"nephrolithiasis",
"nephropathy",
"nephrotic",
"nerve",
"nerves",
"nervosa",
"nervous",
"neural",
"neuralgia",
"neuritis",
"neuroendocrine",
"neurofibromatosis",
"neurogenic",
"neurohypophysis",
"neuroleptic",
"neuroleptics",
"neurologic",
"neurological",
"neuromyelitis",
"neuronitis",
"neuropacemaker",
"neuropathy",
"neurosyphilis",
"neutropenia",
"neutrophils",
"nevus",
"newborn",
"nigricans",
"nile",
"nipple",
"nocturia",
"node",
"nodes",
"nodosa",
"nodosum",
"nodular",
"nodule",
"nonabsorption",
"nonalcoholic",
"nonarthropodborne",
"nonautoimmune",
"nonautologous",
"noncollision",
"noncompliance",
"nonconvulsive",
"nondominant",
"nonexudative",
"nonfatal",
"nonhealing",
"nonhemolytic",
"noninfectious",
"noninflammatory",
"nonmagnetic",
"nonmedicinal",
"nonmotorized",
"nonnarcotic",
"nonneoplastic",
"nonobstructive",
"nonorganic",
"nonpetroleumbased",
"nonproliferative",
"nonpsychotic",
"nonpyogenic",
"nonrelated",
"nonrenal",
"nonrheumatic",
"nonruptured",
"nonspeaking",
"nonspecific",
"nonsteroidal",
"nonsuppurative",
"nonteratogenic",
"nonthrombocytopenic",
"nontoxic",
"nontraffic",
"nontraumatic",
"nonunion",
"nonvenomous",
"normal",
"norwalk",
"nos",
"nose",
"noxious",
"nsaid",
"nuclear",
"nutritional",
"nuts",
"nystagmus",
"obesity",
"object",
"objective",
"objects",
"obliterans",
"observation",
"obsessivecompulsive",
"obstetrical",
"obstruction",
"obstructive",
"occipital",
"occlusion",
"occulta",
"occupant",
"occurring",
"ocular",
"oculomotor",
"odontogenic",
"offroad",
"oils",
"old",
"olecranon",
"oligohydramnios",
"oliguria",
"one",
"onset",
"onychia",
"oophoritis",
"opacities",
"open",
"openangle",
"opening",
"operation",
"operations",
"ophthalmic",
"ophthalmological",
"ophthalmoplegia",
"opiate",
"opiates",
"opioid",
"opium",
"opportunistic",
"oppositional",
"optic",
"optica",
"oral",
"orbit",
"orbital",
"orchitis",
"organ",
"organic",
"organism",
"organisms",
"organizing",
"organophosphate",
"organs",
"orifice",
"origin",
"originating",
"orofacial",
"oropharyngeal",
"oropharynx",
"orthopedic",
"orthopnea",
"orthostatic",
"os",
"osseous",
"ossification",
"osteitis",
"osteoarthrosis",
"osteochondrosis",
"osteodystrophy",
"osteogenesis",
"osteolysis",
"osteomalacia",
"osteomyelitis",
"osteoporosis",
"ostium",
"otalgia",
"otherwise",
"otitis",
"otogenic",
"otorhinolaryngological",
"otorrhea",
"otosclerosis",
"outcome",
"ovarian",
"ovaries",
"ovary",
"overactivity",
"overexertion",
"overflow",
"overlap",
"overload",
"overweight",
"oxazolidine",
"oxidase",
"oxygen",
"pacemaker",
"packing",
"pain",
"painful",
"paintball",
"paints",
"palate",
"palindromic",
"palliative",
"pallor",
"palm",
"palmar",
"palpitations",
"palsies",
"palsy",
"pancreas",
"pancreatic",
"pancreatitis",
"pancytopenia",
"panhypopituitarism",
"panic",
"panniculitis",
"panophthalmitis",
"panuveitis",
"papanicolaou",
"papillae",
"papillary",
"papilledema",
"papillomavirus",
"paraganglia",
"parainfluenza",
"paralysis",
"paralytic",
"parametritis",
"paranoid",
"parapharyngeal",
"paraphrenia",
"paraplegia",
"paraproteinemia",
"paraproteinemias",
"parapsoriasis",
"parasitic",
"parasympatholytics",
"parasympathomimetics",
"parathyroid",
"parenchyma",
"parenchymal",
"parentchild",
"paresthetica",
"parietal",
"parietoalveolar",
"parkinsonism",
"paronychia",
"parotid",
"paroxysmal",
"part",
"partial",
"partialis",
"participant",
"partner",
"parts",
"parvovirus",
"passage",
"passages",
"passenger",
"passive",
"pasteurellosis",
"pataus",
"patella",
"patellar",
"patent",
"pathogens",
"pathologic",
"pathological",
"pathways",
"patient",
"patients",
"pauciarticular",
"pay",
"peanuts",
"pectoris",
"pectus",
"pedal",
"pedestrian",
"pedicle",
"pediculosis",
"pediculus",
"pellagra",
"pelvic",
"pelvis",
"pemphigoid",
"pemphigus",
"penetrating",
"penetration",
"penicillin",
"penicillins",
"penis",
"peptic",
"percent",
"percutaneous",
"perforation",
"performance",
"perfringens",
"perfusion",
"perianal",
"periapical",
"pericarditis",
"pericardium",
"perichondritis",
"perinatal",
"perineal",
"perinephric",
"perineum",
"periocular",
"period",
"periodic",
"periodontal",
"periodontitis",
"periodontosis",
"peripartum",
"peripheral",
"periprosthetic",
"peristalsis",
"peritoneal",
"peritoneum",
"peritonitis",
"peritonsillar",
"periumbilic",
"pernicious",
"peroneal",
"perpetrator",
"persistent",
"persisting",
"person",
"personal",
"personality",
"persons",
"pertussis",
"pervasive",
"pes",
"pesticides",
"petit",
"petrositis",
"peyronies",
"phalanges",
"phalanx",
"phantom",
"pharmaceutical",
"pharyngeal",
"pharyngitis",
"pharyngoesophageal",
"pharynx",
"phase",
"phenomena",
"phenothiazinebased",
"phimosis",
"phlebitis",
"phobias",
"phosphate",
"phosphorus",
"photokeratitis",
"phycomycosis",
"physical",
"physiological",
"phytonadione",
"pica",
"piercing",
"pigment",
"pigmentary",
"pill",
"pilonidal",
"pineal",
"pinna",
"pisiform",
"pituitary",
"pityriasis",
"place",
"placenta",
"placentae",
"placental",
"places",
"planned",
"plant",
"plantar",
"plants",
"planus",
"plaque",
"plasma",
"platelet",
"played",
"playground",
"pleura",
"pleural",
"pleurisy",
"plexus",
"plexusblocking",
"pneumococcal",
"pneumococcus",
"pneumoconiosis",
"pneumocystosis",
"pneumogastric",
"pneumohemothorax",
"pneumonia",
"pneumoniae",
"pneumonitis",
"pneumonopathies",
"pneumonopathy",
"pneumothorax",
"poisoning",
"polio",
"poliomyelitis",
"poliovirus",
"polishing",
"pollen",
"polyarteritis",
"polyarthritis",
"polyarthropathies",
"polyarthropathy",
"polyarticular",
"polyclonal",
"polycystic",
"polycythemia",
"polydipsia",
"polyglandular",
"polyhydramnios",
"polymorphonuclear",
"polymyalgia",
"polymyositis",
"polyneuritis",
"polyneuropathy",
"polyp",
"polyphagia",
"polyps",
"polyuria",
"pool",
"poor",
"poorly",
"popliteal",
"porphyrin",
"portal",
"portion",
"position",
"positional",
"positive",
"post",
"postablative",
"postcholecystectomy",
"postconcussion",
"postductal",
"posterior",
"postgastric",
"posthemorrhagic",
"postherpetic",
"postinfection",
"postinfectious",
"postinflammatory",
"postlaminectomy",
"postmastectomy",
"postmenopausal",
"postmyocardial",
"postnasal",
"postoperative",
"postpartum",
"postphlebetic",
"postprocedural",
"postsurgical",
"postthoracotomy",
"posttransplant",
"posttraumatic",
"postural",
"postvaricella",
"potentially",
"pouchitis",
"powered",
"praderwilli",
"pre",
"precerebral",
"precipitate",
"precipitous",
"precordial",
"predominance",
"predominant",
"predominantly",
"preductal",
"preeclampsia",
"preexisting",
"preglaucoma",
"pregnancies",
"pregnancy",
"pregnant",
"premature",
"prematurity",
"premenopausal",
"premenstrual",
"premises",
"preoperative",
"preparations",
"prepatellar",
"prepuce",
"presbyopia",
"prescription",
"presence",
"presenile",
"present",
"presentation",
"presenting",
"pressure",
"presumed",
"preterm",
"previa",
"previous",
"priapism",
"prickly",
"primarily",
"primary",
"primigravida",
"primum",
"prinzmetal",
"prior",
"private",
"problem",
"problems",
"procedure",
"procedures",
"process",
"proctitis",
"proctosigmoiditis",
"products",
"profile",
"profound",
"progressive",
"prolapse",
"prolapsed",
"proliferative",
"prolonged",
"prophylactic",
"propionic",
"prostate",
"prostatitis",
"prosthesis",
"prosthetic",
"protectants",
"protein",
"proteincalorie",
"proteinosis",
"proteinuria",
"proteus",
"protozoa",
"protrusion",
"proximal",
"prurigo",
"pruritic",
"pruritus",
"psa",
"pseudobulbar",
"pseudocyst",
"pseudoexfoliation",
"pseudomonas",
"pseudopolyposis",
"psoas",
"psoriasis",
"psoriatic",
"psychiatric",
"psychic",
"psychodysleptics",
"psychogenic",
"psychological",
"psychomotor",
"psychophysical",
"psychophysiological",
"psychosexual",
"psychosis",
"psychostimulant",
"psychostimulants",
"psychotic",
"psychotropic",
"pterygium",
"ptld",
"ptosis",
"pubis",
"public",
"puerperal",
"puerperium",
"pulmonale",
"pulmonary",
"pulmonic",
"pulpal",
"pulsating",
"pump",
"puncture",
"pupillary",
"pure",
"purine",
"purposely",
"purposes",
"purpura",
"purpuras",
"purulent",
"pushing",
"pyelonephritis",
"pyemia",
"pyemic",
"pylori",
"pyloric",
"pylorospasm",
"pylorus",
"pyoderma",
"pyogenic",
"pyrexia",
"pyriform",
"pyrophosphate",
"qt",
"quadrant",
"quadriplegia",
"qualitative",
"quinoline",
"quinolones",
"rabies",
"radial",
"radiation",
"radiculitis",
"radiocarpal",
"radiographic",
"radiological",
"radiotherapy",
"radioulnar",
"radius",
"railway",
"ramus",
"rape",
"rapidly",
"rash",
"rate",
"raynauds",
"reaction",
"reactions",
"reactive",
"reading",
"reasons",
"reattached",
"recent",
"receptor",
"recessive",
"recklinghausens",
"reconstruction",
"recreation",
"recreational",
"rectal",
"rectocele",
"rectosigmoid",
"rectovaginal",
"rectum",
"recurrent",
"red",
"redness",
"reduction",
"redundant",
"reentrant",
"referable",
"referred",
"reflex",
"reflux",
"refusal",
"region",
"regional",
"regions",
"regulation",
"regulators",
"reiters",
"relapse",
"related",
"relative",
"relaxants",
"religion",
"rem",
"remission",
"removal",
"remove",
"removed",
"renal",
"render",
"renovascular",
"repair",
"repeated",
"repetitive",
"replaced",
"replacement",
"residential",
"residual",
"resistance",
"resistant",
"resonance",
"resources",
"respiration",
"respirator",
"respiratory",
"response",
"rest",
"restless",
"restorative",
"restraints",
"resulting",
"results",
"resuscitate",
"retained",
"retention",
"reticuloendotheliosis",
"reticulosarcoma",
"retina",
"retinal",
"retinitis",
"retinochoroiditis",
"retinopathy",
"retrolental",
"retroperitoneal",
"retroperitoneum",
"retropharyngeal",
"return",
"reuptake",
"rh",
"rhabdomyolysis",
"rhesus",
"rheumatic",
"rheumatica",
"rheumatism",
"rheumatoid",
"rhinitis",
"rhinorrhea",
"rhinovirus",
"rhythm",
"rib",
"ribs",
"rickettsial",
"rickettsiosis",
"ridden",
"rider",
"riding",
"rifle",
"right",
"rigidity",
"ritters",
"ritual",
"rls",
"road",
"rodenticides",
"roller",
"rolling",
"root",
"roots",
"rosacea",
"rosea",
"rotator",
"rotavirus",
"routine",
"rsv",
"rtpa",
"running",
"rupture",
"ruptured",
"sac",
"sacral",
"sacroiliac",
"sacroiliitis",
"sacrum",
"sacs",
"saddle",
"salicylates",
"salivary",
"salmonella",
"salpingitis",
"saluretics",
"sampling",
"saphenous",
"sarcoidosis",
"sarcoma",
"satiety",
"scabies",
"scaffolding",
"scalp",
"scanty",
"scaphoid",
"scapula",
"scapular",
"scar",
"scd",
"schilders",
"schistosomiasis",
"schizoaffective",
"schizoid",
"schizophrenia",
"schizophrenic",
"schizophreniform",
"schizotypal",
"schmorls",
"schwannomatosis",
"sciatic",
"sciatica",
"scleritis",
"scleroderma",
"sclerosing",
"sclerosis",
"scoliosis",
"scooter",
"scotoma",
"screening",
"scrotum",
"sde",
"seafood",
"seated",
"sebaceous",
"seborrheic",
"second",
"secondary",
"seconddegree",
"secretion",
"section",
"secundum",
"sedation",
"sedative",
"sedatives",
"sedimentation",
"seeds",
"seizures",
"selective",
"selfinflicted",
"semilunar",
"seminal",
"senile",
"sensation",
"sensations",
"senses",
"sensorineural",
"sensory",
"separation",
"sepsis",
"septal",
"septic",
"septicemia",
"septicemias",
"septum",
"sequelae",
"sequestration",
"seroma",
"serotonin",
"serous",
"serratia",
"serum",
"seven",
"seventh",
"several",
"severe",
"severity",
"sex",
"sexual",
"sezarys",
"shaft",
"shape",
"sharp",
"sheath",
"shigella",
"shigellosis",
"shock",
"short",
"shortness",
"shotgun",
"shoulder",
"shoving",
"shunt",
"sialoadenitis",
"sialolithiasis",
"sicca",
"sick",
"sicklecell",
"sicklecellhbc",
"side",
"sidebody",
"sidewalk",
"sigmoid",
"significance",
"signs",
"silica",
"silicates",
"similar",
"simple",
"simplex",
"single",
"sinoatrial",
"sinus",
"sinuses",
"sinusitis",
"site",
"sites",
"situ",
"situs",
"six",
"sixth",
"skateboard",
"skateboarding",
"skates",
"skating",
"skeletal",
"skier",
"skiing",
"skin",
"skis",
"skull",
"sledding",
"sleep",
"slipping",
"slow",
"slowing",
"small",
"smaller",
"smear",
"smell",
"smoke",
"smooth",
"snow",
"snowboard",
"social",
"soft",
"solar",
"solid",
"solids",
"solitary",
"solvents",
"somatoform",
"sore",
"sound",
"source",
"sources",
"space",
"spasm",
"spasmolytics",
"spastic",
"special",
"specific",
"specification",
"specified",
"spectator",
"speech",
"spermatic",
"sphenoidal",
"spherocytosis",
"sphincter",
"spiders",
"spina",
"spinal",
"spine",
"spinocerebellar",
"spleen",
"splenic",
"splenomegaly",
"splinter",
"spondylitis",
"spondylolisthesis",
"spondylolysis",
"spondylopathy",
"spondylosis",
"sponge",
"spontaneous",
"sport",
"sports",
"spousal",
"spouse",
"sprain",
"sprains",
"spur",
"sputum",
"squamous",
"stage",
"stages",
"stairs",
"staphylococcal",
"staphylococcus",
"state",
"stated",
"states",
"stationary",
"stature",
"status",
"steal",
"steam",
"stem",
"stenosis",
"stepfather",
"steps",
"sterilization",
"sternal",
"sternum",
"steroid",
"steroids",
"stevensjohnson",
"stiffman",
"stiffness",
"stillborn",
"stimulants",
"sting",
"stock",
"stoma",
"stomach",
"stomatitis",
"stool",
"storm",
"strabismic",
"strabismus",
"straightchain",
"straining",
"strains",
"strangulation",
"stream",
"street",
"strenuous",
"streptococcal",
"streptococcus",
"stress",
"striae",
"stricture",
"stridor",
"striking",
"stroke",
"strongyloidiasis",
"struck",
"structure",
"structures",
"study",
"stumbling",
"stump",
"styloid",
"subacute",
"subaortic",
"subarachnoid",
"subchronic",
"subclavian",
"subcondylar",
"subcutaneous",
"subdural",
"subendocardial",
"subglottis",
"subluxation",
"submersion",
"submucous",
"subsequent",
"subserous",
"substance",
"substances",
"substitutes",
"subtotal",
"subtrochanteric",
"sudden",
"suffocation",
"suicidal",
"suicide",
"sulfonamides",
"sulphurbearing",
"sunstroke",
"superficial",
"superimposed",
"superior",
"supervision",
"supplemental",
"supporting",
"suppurative",
"supracondylar",
"supraglottis",
"supraglottitis",
"supraspinatus",
"supraventricular",
"surface",
"surgery",
"surgical",
"surgically",
"surveillance",
"susceptibility",
"susceptible",
"suspected",
"suture",
"swelling",
"swimming",
"swords",
"symbolic",
"sympathetic",
"sympatholytics",
"sympathomimetic",
"sympathomimetics",
"symphysis",
"symptom",
"symptomatic",
"symptoms",
"syncope",
"syncytial",
"syndrome",
"syndromes",
"syndrometoxic",
"synovial",
"synovitis",
"synovium",
"synthetic",
"syphilis",
"syphilitic",
"syringobulbia",
"syringomyelia",
"system",
"systemic",
"systems",
"systolic",
"t1t6",
"t7t12",
"tabes",
"tachycardia",
"tachypnea",
"tackle",
"tags",
"tail",
"takayasus",
"taken",
"takeoff",
"takotsubo",
"talipes",
"tamponade",
"tap",
"tarsal",
"tarsometatarsal",
"tarsus",
"taste",
"tcell",
"td",
"tear",
"tears",
"teeth",
"telangiectasia",
"temperature",
"temporal",
"temporomandibular",
"tenderness",
"tendineae",
"tendinitis",
"tendon",
"tendons",
"tenosynovitis",
"tension",
"term",
"terrorism",
"tertian",
"test",
"testes",
"testicular",
"testis",
"tetanus",
"tetanusdiphtheria",
"tetany",
"tetracycline",
"tetralogy",
"texture",
"thalassemia",
"therapeutic",
"therapy",
"thiamine",
"thigh",
"third",
"thirdstage",
"thoracic",
"thoracoabdominal",
"thoracogenic",
"thoracolumbar",
"thoracoscopic",
"thorax",
"threatened",
"three",
"thrive",
"throat",
"thromboangiitis",
"thrombocythemia",
"thrombocytopenia",
"thrombocytopeniaunspecified",
"thrombocytopenic",
"thrombophlebitis",
"thrombosed",
"thrombosis",
"thrombotic",
"thrown",
"thumb",
"thymus",
"thyroid",
"thyroiditis",
"thyrotoxic",
"thyrotoxicosis",
"tia",
"tibia",
"tibial",
"tibialis",
"tibiofibular",
"tietzes",
"time",
"tinnitus",
"tissue",
"tissues",
"tobacco",
"tobogganing",
"toe",
"toes",
"tolerance",
"tongue",
"tonsil",
"tonsillar",
"tonsillitis",
"tonsils",
"tools",
"tooth",
"tophi",
"tophus",
"topical",
"topically",
"tornado",
"torsion",
"torticollis",
"torus",
"total",
"touch",
"tourettes",
"toxic",
"toxicological",
"toxoid",
"toxoplasmosis",
"tpa",
"trachea",
"tracheitis",
"tracheoesophageal",
"tracheostomy",
"tract",
"tractskin",
"traffic",
"train",
"trait",
"trali",
"tranquilizers",
"transaminase",
"transcervical",
"transfusion",
"transfusions",
"transient",
"transit",
"transitory",
"transluminal",
"transmission",
"transplant",
"transplanted",
"transport",
"transposition",
"transsexualism",
"transverse",
"trapezium",
"trapezoid",
"trauma",
"traumatic",
"treatment",
"tree",
"tremor",
"trial",
"triatriatum",
"trichomonal",
"trichomoniasis",
"trichuriasis",
"tricuspid",
"tricyclic",
"trifascicular",
"trigeminal",
"trigger",
"trigone",
"trimalleolar",
"tripping",
"triquetral",
"trochanteric",
"trochlear",
"true",
"trunk",
"tubal",
"tube",
"tubercle",
"tuberculin",
"tuberculoma",
"tuberculosis",
"tuberculous",
"tuberosity",
"tuberous",
"tubes",
"tubing",
"tubular",
"tularemia",
"tumor",
"tumors",
"tunnel",
"turbinates",
"twin",
"twins",
"two",
"twothirds",
"tympanic",
"type",
"types",
"ulcer",
"ulceration",
"ulcerative",
"ulna",
"ulnar",
"ultraviolet",
"umbilical",
"unarmed",
"unavailability",
"uncertain",
"unciform",
"uncomplicated",
"uncontrolled",
"undescended",
"undetermined",
"undiagnosed",
"unemployment",
"unequal",
"unilateral",
"uninodular",
"universal",
"unknown",
"unqualified",
"unspecified",
"unspecifiednot",
"unspecifiedrecurrent",
"unstageable",
"upper",
"upperouter",
"urea",
"ureter",
"ureteral",
"ureteric",
"ureteropelvic",
"ureterovesical",
"urethra",
"urethral",
"urethritis",
"urge",
"urgency",
"uric",
"urinary",
"urinarygenital",
"urination",
"urine",
"urogenital",
"urticaria",
"usa",
"use",
"used",
"uteri",
"uterine",
"uterovaginal",
"uterus",
"v",
"vaccination",
"vaccinations",
"vaccines",
"vagina",
"vaginal",
"vaginismus",
"vaginitis",
"valgus",
"vallecula",
"valve",
"valves",
"vapor",
"vapors",
"variable",
"variant",
"variants",
"varicella",
"varices",
"varicose",
"varnishes",
"vascular",
"vasectomy",
"vasodilators",
"vater",
"vault",
"vegetables",
"vegetative",
"vehicle",
"vehicles",
"vein",
"veins",
"vena",
"venereal",
"venom",
"venomous",
"venous",
"ventilation",
"ventilator",
"ventral",
"ventricles",
"ventricular",
"vera",
"vermiformis",
"vermilion",
"versicolor",
"version",
"vertebra",
"vertebrae",
"vertebral",
"vertebrobasilar",
"vertiginous",
"vertigo",
"vesicant",
"vesicoureteral",
"vesiculitis",
"vessel",
"vessels",
"vestibular",
"via",
"vibrio",
"victim",
"viii",
"villonodular",
"vin",
"viral",
"viremia",
"virus",
"viruses",
"visceral",
"visible",
"vision",
"visual",
"visuospatial",
"vitamin",
"vitamins",
"vitiligo",
"vitreous",
"vocal",
"voice",
"volume",
"volvulus",
"vomiting",
"vomitus",
"von",
"vulnificus",
"vulva",
"vulvar",
"vulvodynia",
"vulvovaginitis",
"wake",
"walking",
"wall",
"walls",
"wandering",
"warts",
"wasps",
"wasting",
"water",
"waterbalance",
"watercraft",
"waterskiing",
"weakening",
"weakness",
"weather",
"web",
"weeks",
"wegeners",
"weight",
"west",
"wheelchair",
"wheezing",
"whether",
"white",
"whole",
"whooping",
"willebrands",
"wiring",
"withdrawal",
"within",
"without",
"woodworking",
"workers",
"wound",
"wounds",
"wrist",
"writers",
"wrong",
"x",
"xerotica",
"xi",
"zone",
"zoonotic",
"zoster",
"zygomycosis"
] | ---
language: "en"
tags:
- bert
- medical
- clinical
- diagnosis
- text-classification
thumbnail: "https://core.app.datexis.com/static/paper.png"
widget:
- text: "Patient with hypertension presents to ICU."
---
# CORe Model - Clinical Diagnosis Prediction
## Model description
The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf).
It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.
This model checkpoint is **fine-tuned on the task of diagnosis prediction**.
The model expects patient admission notes as input and outputs multi-label ICD9-code predictions.
#### Model Predictions
The model makes predictions on a total of 9237 labels. These contain 3- and 4-digit ICD9 codes and textual descriptions of these codes. The 4-digit codes and textual descriptions help to incorporate further topical and hierarchical information into the model during training (see Section 4.2 _ICD+: Incorporation of ICD Hierarchy_ in our paper). We recommend using only the **3-digit code predictions at inference time**, because only those have been evaluated in our work.
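Since the label set mixes code granularities with textual descriptions, a minimal sketch for keeping only plain numeric 3-digit codes could look like this (the label values are illustrative, and V-/E-codes are ignored for brevity):

```python
def keep_3_digit_codes(labels):
    """Keep only plain numeric 3-digit ICD9 code predictions (ignores V/E codes)."""
    return [label for label in labels if label.isdigit() and len(label) == 3]

# Hypothetical predictions mixing 3-digit codes, 4-digit codes and descriptions
predicted_labels = ["428", "4280", "congestive heart failure", "584"]
print(keep_3_digit_codes(predicted_labels))  # ['428', '584']
```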
#### How to use CORe Diagnosis Prediction
You can load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-diagnosis-prediction")
model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-diagnosis-prediction")
```
The following code shows an inference example:
```python
import torch

text = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life."
tokenized_input = tokenizer(text, return_tensors="pt")
output = model(**tokenized_input)
predictions = torch.sigmoid(output.logits)
predicted_labels = [model.config.id2label[_id] for _id in (predictions > 0.3).nonzero()[:, 1].tolist()]
```
Note: For the best performance, we recommend determining the thresholds (0.3 in this example) individually per label.
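For instance, the single cutoff can be replaced with per-label thresholds; a minimal sketch, with purely illustrative (untuned) probabilities and threshold values:

```python
# Hypothetical per-label probabilities and thresholds (values are illustrative, not tuned)
probs = {"428": 0.40, "584": 0.45, "401": 0.25}
thresholds = {"428": 0.30, "584": 0.50, "401": 0.20}

# Keep a label only when its probability exceeds its own threshold
predicted = [label for label, p in probs.items() if p > thresholds[label]]
print(predicted)  # ['428', '401']
```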
### More Information
For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/).
### Cite
```bibtex
@inproceedings{vanaken21,
author = {Betty van Aken and
Jens-Michalis Papaioannou and
Manuel Mayrdorfer and
Klemens Budde and
Felix A. Gers and
Alexander Löser},
title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised
Knowledge Integration},
booktitle = {Proceedings of the 16th Conference of the European Chapter of the
Association for Computational Linguistics: Main Volume, {EACL} 2021,
Online, April 19 - 23, 2021},
publisher = {Association for Computational Linguistics},
year = {2021},
}
``` |
807 | DATEXIS/CORe-clinical-mortality-prediction | [
"0",
"1"
] | ---
language: "en"
tags:
- bert
- medical
- clinical
- mortality
thumbnail: "https://core.app.datexis.com/static/paper.png"
---
# CORe Model - Clinical Mortality Risk Prediction
## Model description
The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf).
It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.
This model checkpoint is **fine-tuned on the task of mortality risk prediction**.
The model expects patient admission notes as input and outputs the predicted risk of in-hospital mortality.
#### How to use CORe Mortality Risk Prediction
You can load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
```
The following code shows an inference example:
```python
import torch

text = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life."
tokenized_input = tokenizer(text, return_tensors="pt")
output = model(**tokenized_input)
predictions = torch.softmax(output.logits.detach(), dim=1)
mortality_risk_prediction = predictions[0][1].item()
```
### More Information
For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/).
### Cite
```bibtex
@inproceedings{vanaken21,
author = {Betty van Aken and
Jens-Michalis Papaioannou and
Manuel Mayrdorfer and
Klemens Budde and
Felix A. Gers and
Alexander Löser},
title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised
Knowledge Integration},
booktitle = {Proceedings of the 16th Conference of the European Chapter of the
Association for Computational Linguistics: Main Volume, {EACL} 2021,
Online, April 19 - 23, 2021},
publisher = {Association for Computational Linguistics},
year = {2021},
}
``` |
808 | bvanaken/clinical-assertion-negation-bert | [
"PRESENT",
"ABSENT",
"POSSIBLE"
] | ---
language: "en"
tags:
- bert
- medical
- clinical
- assertion
- negation
- text-classification
widget:
- text: "Patient denies [entity] SOB [entity]."
---
# Clinical Assertion / Negation Classification BERT
## Model description
The Clinical Assertion and Negation Classification BERT is introduced in the paper [Assertion Detection in Clinical Notes: Medical Language Models to the Rescue?](https://aclanthology.org/2021.nlpmc-1.5/). The model helps structure information in clinical patient letters by classifying medical conditions mentioned in the letter into PRESENT, ABSENT and POSSIBLE.
The model is based on the [ClinicalBERT - Bio + Discharge Summary BERT Model](https://huggingface.co/emilyalsentzer/Bio_Discharge_Summary_BERT) by Alsentzer et al. and fine-tuned on assertion data from the [2010 i2b2 challenge](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3168320/).
#### How to use the model
You can load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained("bvanaken/clinical-assertion-negation-bert")
model = AutoModelForSequenceClassification.from_pretrained("bvanaken/clinical-assertion-negation-bert")
```
The model expects input in the form of spans/sentences with one marked entity to classify as `PRESENT(0)`, `ABSENT(1)` or `POSSIBLE(2)`. The entity in question is marked by surrounding it with the special token `[entity]`.
Example input and inference:
```python
text = "The patient recovered during the night and now denies any [entity] shortness of breath [entity]."
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
classification = classifier(text)
# [{'label': 'ABSENT', 'score': 0.9842607378959656}]
```
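If entity spans come from an upstream extractor as character offsets, a small helper (hypothetical, not part of this repository) can insert the markers before classification:

```python
def mark_entity(text, start, end):
    """Wrap the character span [start, end) with [entity] markers."""
    return text[:start] + "[entity] " + text[start:end] + " [entity]" + text[end:]

sentence = "Patient denies chest pain."
print(mark_entity(sentence, 15, 25))
# Patient denies [entity] chest pain [entity].
```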
### Cite
When working with the model, please cite our paper as follows:
```bibtex
@inproceedings{van-aken-2021-assertion,
title = "Assertion Detection in Clinical Notes: Medical Language Models to the Rescue?",
author = "van Aken, Betty and
Trajanovska, Ivana and
Siu, Amy and
Mayrdorfer, Manuel and
Budde, Klemens and
Loeser, Alexander",
booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.nlpmc-1.5",
doi = "10.18653/v1/2021.nlpmc-1.5"
}
``` |
811 | cardiffnlp/bertweet-base-emoji | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | |
812 | cardiffnlp/bertweet-base-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | |
816 | cardiffnlp/bertweet-base-sentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | |
817 | cardiffnlp/bertweet-base-stance-abortion | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | |
818 | cardiffnlp/bertweet-base-stance-atheism | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | |
819 | cardiffnlp/bertweet-base-stance-climate | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | |
820 | cardiffnlp/bertweet-base-stance-feminist | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | |
821 | cardiffnlp/bertweet-base-stance-hillary | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | |
822 | cardiffnlp/twitter-roberta-base-emoji | [
"❤",
"😍",
"😂",
"💕",
"🔥",
"😊",
"😎",
"✨",
"💙",
"😘",
"📷",
"🇺🇸",
"☀",
"💜",
"😉",
"💯",
"😁",
"🎄",
"📸",
"😜"
] | # Twitter-roBERTa-base for Emoji Prediction
This is a roBERTa-base model trained on ~58M tweets and finetuned for emoji prediction with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='emoji'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Looking forward to Christmas"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Looking forward to Christmas"
# text = preprocess(text)
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) 🎄 0.5457
2) 😊 0.1417
3) 😁 0.0649
4) 😍 0.0395
5) ❤️ 0.03
6) 😜 0.028
7) ✨ 0.0263
8) 😉 0.0237
9) 😂 0.0177
10) 😎 0.0166
11) 😘 0.0143
12) 💕 0.014
13) 💙 0.0076
14) 💜 0.0068
15) 🔥 0.0065
16) 💯 0.004
17) 🇺🇸 0.0037
18) 📷 0.0034
19) ☀ 0.0033
20) 📸 0.0021
```
|
823 | cardiffnlp/twitter-roberta-base-emotion | [
"joy",
"optimism",
"anger",
"sadness"
] | # Twitter-roBERTa-base for Emotion Recognition
This is a RoBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
<b>New!</b> We just released a new emotion recognition model trained with more emotion types and with a newer RoBERTa-based model.
See [twitter-roberta-base-emotion-multilabel-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion-multilabel-latest) and [TweetNLP](https://github.com/cardiffnlp/tweetnlp) for more details.
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='emotion'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Celebrating my promotion 😎"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Celebrating my promotion 😎"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) joy 0.9382
2) optimism 0.0362
3) anger 0.0145
4) sadness 0.0112
```
|
824 | cardiffnlp/twitter-roberta-base-hate | [
"non-hate",
"hate"
] | # Twitter-roBERTa-base for Hate Speech Detection
This is a roBERTa-base model trained on ~58M tweets and finetuned for hate speech detection with the TweetEval benchmark.
This model is specialized to detect hate speech against women and immigrants.
**NEW!** We have made available a more recent and robust hate speech detection model here: [https://huggingface.co/cardiffnlp/twitter-roberta-base-hate-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-hate-latest)
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='hate'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) not-hate 0.9168
2) hate 0.0832
```
|
825 | cardiffnlp/twitter-roberta-base-irony | [
"non_irony",
"irony"
] | # Twitter-roBERTa-base for Irony Detection
This is a roBERTa-base model trained on ~58M tweets and finetuned for irony detection with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='irony'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Great, it broke the first day..."
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Great, it broke the first day..."
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) irony 0.914
2) non_irony 0.086
```
|
826 | cardiffnlp/twitter-roberta-base-offensive | [
"non-offensive",
"offensive"
] | # Twitter-roBERTa-base for Offensive Language Identification
This is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='offensive'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) not-offensive 0.9073
2) offensive 0.0927
```
|
827 | cardiffnlp/twitter-roberta-base-sentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
datasets:
- tweet_eval
language:
- en
---
# Twitter-roBERTa-base for Sentiment Analysis
This is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see [XLM-T](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)).
- Reference Paper: [_TweetEval_ (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive
<b>New!</b> We just released a new sentiment analysis model trained on more recent and a larger quantity of tweets.
See [twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) and [TweetNLP](https://tweetnlp.org) for more details.
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) positive 0.8466
2) neutral 0.1458
3) negative 0.0076
```
### BibTeX entry and citation info
Please cite the [reference paper](https://aclanthology.org/2020.findings-emnlp.148/) if you use this model.
```bibtex
@inproceedings{barbieri-etal-2020-tweeteval,
title = "{T}weet{E}val: Unified Benchmark and Comparative Evaluation for Tweet Classification",
author = "Barbieri, Francesco and
Camacho-Collados, Jose and
Espinosa Anke, Luis and
Neves, Leonardo",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.148",
doi = "10.18653/v1/2020.findings-emnlp.148",
pages = "1644--1650"
}
``` |
828 | cardiffnlp/twitter-roberta-base-stance-abortion | [
"none",
"against",
"favor"
] | |
829 | cardiffnlp/twitter-roberta-base-stance-atheism | [
"none",
"against",
"favor"
] | |
830 | cardiffnlp/twitter-roberta-base-stance-climate | [
"none",
"against",
"favor"
] | |
831 | cardiffnlp/twitter-roberta-base-stance-feminist | [
"none",
"against",
"favor"
] | |
832 | cardiffnlp/twitter-roberta-base-stance-hillary | [
"none",
"against",
"favor"
] | |
833 | cardiffnlp/twitter-xlm-roberta-base-sentiment | [
"negative",
"neutral",
"positive"
] | ---
language: multilingual
widget:
- text: "🤗"
- text: "T'estimo! ❤️"
- text: "I love you!"
- text: "I hate you 🤮"
- text: "Mahal kita!"
- text: "사랑해!"
- text: "난 너가 싫어"
- text: "😍😍😍"
---
# twitter-XLM-roBERTa-base for Sentiment Analysis
This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and finetuned for sentiment analysis. The sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt) but it can be used for more languages (see paper for details).
- Paper: [XLM-T: A Multilingual Language Model Toolkit for Twitter](https://arxiv.org/abs/2104.12250).
- Git Repo: [XLM-T official repository](https://github.com/cardiffnlp/xlm-t).
This model has been integrated into the [TweetNLP library](https://github.com/cardiffnlp/tweetnlp).
## Example Pipeline
```python
from transformers import pipeline
model_path = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("T'estimo!")
```
```
[{'label': 'Positive', 'score': 0.6600581407546997}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
MODEL = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) Positive 0.7673
2) Neutral 0.2015
3) Negative 0.0313
```
|
834 | carlosaguayo/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9299984897610097
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1689
- Accuracy: 0.9295
- F1: 0.9300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2853 | 1.0 | 250 | 0.1975 | 0.9235 | 0.9233 |
| 0.1568 | 2.0 | 500 | 0.1689 | 0.9295 | 0.9300 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
836 | celine/emotion-detection_indobenchmark-indobert-lite-base-p1 | [
"anger",
"fear",
"joy",
"sadness"
] | |
837 | celine/hate-speech_indobenchmark-indobert-lite-base-p1 | [
"hs",
"non_hs"
] | |
838 | celtics1863/env-bert-cls-chinese | [
"环境影响评价与管理",
"碳排放控制",
"水污染与控制",
"大气污染与控制",
"土壤污染与控制",
"环境生态",
"固体废物",
"环境毒理与健康",
"环境微生物",
"环境政策与经济"
] | ---
language:
- zh
tags:
- bert
- pytorch
- environment
- multi-class
- classification
---
Chinese environmental text classification model, fine-tuned from env-bert-chinese on a 1.6M dataset.
It classifies text into 10 categories: environmental impact assessment and management, carbon emission control, water pollution and control, air pollution and control, soil pollution and control, environmental ecology, solid waste, environmental toxicology and health, environmental microbiology, and environmental policy and economics.
The project is ongoing, and related content will be updated over time.
Research group, School of Environment, Tsinghua University.
For requests or suggestions, contact bi.huaibin@foxmail.com |
839 | celtics1863/env-bert-topic | [
"生态环境",
"水污染",
"野生动物保护",
"太阳能",
"环保经济",
"污水处理",
"绿色建筑",
"水处理",
"噪音污染",
"温室效应",
"净水设备",
"净水器",
"自来水",
"生活",
"环境评估",
"空气污染",
"环境评价",
"工业污染",
"雾霾",
"植树",
"环保行业",
"水处理工程",
"沙漠治理",
"巴黎协定",
"核能",
"噪音",
"环评工程师",
"二氧化碳",
"低碳",
"自然环境",
"沙尘暴",
"环境工程",
"秸秆焚烧",
"PM 2.5",
"太空垃圾",
"穹顶之下(纪录片)",
"垃圾",
"环境科学",
"净水",
"污水排放",
"室内空气污染",
"环境污染",
"全球变暖",
"邻居噪音",
"土壤污染",
"生物多样性",
"碳交易",
"污染治理",
"雾霾治理",
"碳金融",
"建筑节能",
"风能及风力发电",
"温室气体",
"环境保护",
"碳排放",
"垃圾处理器",
"气候变化",
"化学污染",
"地球一小时",
"环保组织",
"物种多样性",
"节能减排",
"核污染",
"环保督查",
"垃圾处理",
"垃圾分类",
"重金属污染",
"环境伦理学",
"垃圾焚烧"
] | ---
language: zh
widget:
- text: "美国退出《巴黎协定》"
- text: "污水处理厂中的功耗需要减少"
tags:
- pretrain
- pytorch
- environment
- classification
- topic classification
---
Topic classification model built from all sub-topics under the "Environment" topic on Zhihu ("某乎"); after filtering, 69 classes remain.
Top-1 accuracy: 60.7;
Top-3 accuracy: 81.6.
It can be used as a preprocessing step for Chinese environmental text mining.
Labels:
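The top-1/top-3 accuracy figures above can be computed from per-example score vectors; a minimal, self-contained sketch (the function and toy data are illustrative, not tied to this model):

```python
def top_k_accuracy(scores, labels, k):
    """scores: list of per-class score lists; labels: true class indices."""
    hits = 0
    for row, true in zip(scores, labels):
        # indices of the k highest-scoring classes
        topk = sorted(range(len(row)), key=lambda j: row[j])[-k:]
        hits += true in topk
    return hits / len(labels)

# Toy example: 2 of 3 samples are correct at top-1, all 3 at top-2.
scores = [[0.1, 0.7, 0.2],
          [0.5, 0.3, 0.2],
          [0.3, 0.4, 0.3]]
labels = [1, 0, 2]
```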
"生态环境","水污染", "野生动物保护", "太阳能", "环保经济", "污水处理", "绿色建筑", "水处理", "噪音污染", "温室效应", "净水设备",
"净水器", "自来水", "生活", "环境评估", "空气污染", "环境评价", "工业污染", "雾霾", "植树", "环保行业", "水处理工程", "沙漠治理",
"巴黎协定", "核能", "噪音", "环评工程师", "二氧化碳", "低碳", "自然环境", "沙尘暴", "环境工程", "秸秆焚烧", "PM 2.5", "太空垃圾",
"穹顶之下(纪录片)", "垃圾", "环境科学", "净水", "污水排放", "室内空气污染", "环境污染", "全球变暖", "邻居噪音", "土壤污染", "生物多样性",
"碳交易", "污染治理", "雾霾治理", "碳金融", "建筑节能", "风能及风力发电", "温室气体", "环境保护", "碳排放", "垃圾处理器", "气候变化", "化学污染",
"地球一小时", "环保组织", "物种多样性", "节能减排", "核污染", "环保督查", "垃圾处理", "垃圾分类", "重金属污染", "环境伦理学", "垃圾焚烧" |
840 | chisadi/nice-distilbert-v2 | [
"NICE_1",
"NICE_10",
"NICE_11",
"NICE_12",
"NICE_13",
"NICE_14",
"NICE_15",
"NICE_16",
"NICE_17",
"NICE_18",
"NICE_19",
"NICE_2",
"NICE_20",
"NICE_21",
"NICE_22",
"NICE_23",
"NICE_24",
"NICE_25",
"NICE_26",
"NICE_27",
"NICE_28",
"NICE_29",
"NICE_3",
"NICE_30",
"NICE_31",
"NICE_32",
"NICE_33",
"NICE_34",
"NICE_35",
"NICE_36",
"NICE_37",
"NICE_38",
"NICE_39",
"NICE_4",
"NICE_40",
"NICE_41",
"NICE_42",
"NICE_43",
"NICE_44",
"NICE_45",
"NICE_5",
"NICE_6",
"NICE_7",
"NICE_8",
"NICE_9"
### DistilBERT model fine-tuned on the task of classifying product descriptions into one of 45 broad [NICE classifications](https://www.wipo.int/classifications/nice/en/)
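Note that the label list above is in lexicographic order (`NICE_1`, `NICE_10`, `NICE_11`, …); to present predictions in numeric class order, sort on the numeric suffix. A small illustration (not part of the model):

```python
labels = [f"NICE_{i}" for i in range(1, 46)]

# Lexicographic sorting interleaves NICE_1, NICE_10, NICE_11, ... as in the label list.
lexicographic = sorted(labels)

# Sorting on the numeric suffix restores NICE_1 .. NICE_45 order.
numeric = sorted(labels, key=lambda s: int(s.split("_")[1]))
```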
|
843 | chkla/roberta-argument | [
"NON-ARGUMENT",
"ARGUMENT"
] | ---
language: en
widget:
- text: "It has been determined that the amount of greenhouse gases have decreased by almost half because of the prevalence in the utilization of nuclear power."
---
### Welcome to RoBERTArg!
🤖 **Model description**
This model was trained on ~25k heterogeneous manually annotated sentences (📚 [Stab et al. 2018](https://www.aclweb.org/anthology/D18-1402/)) of controversial topics to classify text into one of two labels: 🏷 **NON-ARGUMENT** (0) and **ARGUMENT** (1).
🗃 **Dataset**
The dataset (📚 Stab et al. 2018) labels a sentence as an **ARGUMENT** (\~11k) if it gives a relevant reason for supporting or opposing the topic, and as a **NON-ARGUMENT** (\~14k) if it does not. The authors focus on controversial topics, i.e., topics that include "an obvious polarity to the possible outcomes", and compile a final set of eight controversial topics: _abortion, school uniforms, death penalty, marijuana legalization, nuclear energy, cloning, gun control, and minimum wage_.
| TOPIC | ARGUMENT | NON-ARGUMENT |
|----|----|----|
| abortion | 2,213 | 2,427 |
| school uniforms | 325 | 1,734 |
| death penalty | 325 | 2,083 |
| marijuana legalization | 325 | 1,262 |
| nuclear energy | 325 | 2,118 |
| cloning | 325 | 1,494 |
| gun control | 325 | 1,889 |
| minimum wage | 325 | 1,346 |
🏃🏼♂️**Model training**
**RoBERTArg** was fine-tuned from a pre-trained RoBERTa (base) model from Hugging Face using the Hugging Face Trainer with the following hyperparameters:
```
training_args = TrainingArguments(
num_train_epochs=2,
learning_rate=2.3102e-06,
seed=8,
per_device_train_batch_size=64,
per_device_eval_batch_size=64,
)
```
📊 **Evaluation**
The model was evaluated on an evaluation set (20%):
| Model | Acc | F1 | R arg | R non | P arg | P non |
|----|----|----|----|----|----|----|
| RoBERTArg | 0.8193 | 0.8021 | 0.8463 | 0.7986 | 0.7623 | 0.8719 |
Showing the **confusion matrix** using again the evaluation set:
| | ARGUMENT | NON-ARGUMENT |
|----|----|----|
| ARGUMENT | 2213 | 558 |
| NON-ARGUMENT | 325 | 1790 |
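As a sanity check, the reported accuracy of 0.8193 can be recovered from the confusion matrix above:

```python
# Counts from the confusion matrix above (diagonal = correct predictions).
correct = 2213 + 1790
total = 2213 + 558 + 325 + 1790
accuracy = correct / total
print(round(accuracy, 4))  # 0.8193
```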
⚠️ **Intended Uses & Potential Limitations**
The model can only be a starting point to dive into the exciting field of argument mining. But be aware. An argument is a complex structure, with multiple dependencies. Therefore, the model may perform less well on different topics and text types not included in the training set.
Enjoy and stay tuned! 🚀
🐦 Twitter: [@chklamm](http://twitter.com/chklamm) |
844 | chrommium/bert-base-multilingual-cased-finetuned-news-headlines | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model_index:
- name: bert-base-multilingual-cased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
metric:
name: Accuracy
type: accuracy
value: 0.9755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-cola
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1729
- Accuracy: 0.9755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5119 | 1.0 | 625 | 0.2386 | 0.922 |
| 0.2536 | 2.0 | 1250 | 0.2055 | 0.949 |
| 0.1718 | 3.0 | 1875 | 0.1733 | 0.969 |
| 0.0562 | 4.0 | 2500 | 0.1661 | 0.974 |
| 0.0265 | 5.0 | 3125 | 0.1729 | 0.9755 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
845 | chrommium/rubert-base-cased-sentence-finetuned-headlines_X | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rubert-base-cased-sentence-finetuned-headlines_X
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-headlines_X
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2535
- Accuracy: 0.952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 157 | 0.2759 | 0.912 |
| No log | 2.0 | 314 | 0.2538 | 0.936 |
| No log | 3.0 | 471 | 0.2556 | 0.945 |
| 0.1908 | 4.0 | 628 | 0.2601 | 0.95 |
| 0.1908 | 5.0 | 785 | 0.2535 | 0.952 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
846 | chrommium/rubert-base-cased-sentence-finetuned-sent_in_news_sents | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-base-cased-sentence-finetuned-sent_in_news_sents
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7224199288256228
- name: F1
type: f1
value: 0.5137303178348194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_news_sents
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9506
- Accuracy: 0.7224
- F1: 0.5137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 1.0045 | 0.6690 | 0.1388 |
| No log | 2.0 | 162 | 0.9574 | 0.6228 | 0.2980 |
| No log | 3.0 | 243 | 1.0259 | 0.6477 | 0.3208 |
| No log | 4.0 | 324 | 1.1262 | 0.6619 | 0.4033 |
| No log | 5.0 | 405 | 1.3377 | 0.6299 | 0.3909 |
| No log | 6.0 | 486 | 1.5716 | 0.6868 | 0.3624 |
| 0.6085 | 7.0 | 567 | 1.6286 | 0.6762 | 0.4130 |
| 0.6085 | 8.0 | 648 | 1.6450 | 0.6940 | 0.4775 |
| 0.6085 | 9.0 | 729 | 1.7108 | 0.7224 | 0.4920 |
| 0.6085 | 10.0 | 810 | 1.8792 | 0.7046 | 0.5028 |
| 0.6085 | 11.0 | 891 | 1.8670 | 0.7153 | 0.4992 |
| 0.6085 | 12.0 | 972 | 1.8856 | 0.7153 | 0.4934 |
| 0.0922 | 13.0 | 1053 | 1.9506 | 0.7224 | 0.5137 |
| 0.0922 | 14.0 | 1134 | 2.0363 | 0.7189 | 0.4761 |
| 0.0922 | 15.0 | 1215 | 2.0601 | 0.7224 | 0.5053 |
| 0.0922 | 16.0 | 1296 | 2.0813 | 0.7153 | 0.5038 |
| 0.0922 | 17.0 | 1377 | 2.0960 | 0.7189 | 0.5065 |
| 0.0922 | 18.0 | 1458 | 2.1060 | 0.7224 | 0.5098 |
| 0.0101 | 19.0 | 1539 | 2.1153 | 0.7260 | 0.5086 |
| 0.0101 | 20.0 | 1620 | 2.1187 | 0.7260 | 0.5086 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
847 | chrommium/rubert-base-cased-sentence-finetuned-sent_in_ru | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-base-cased-sentence-finetuned-sent_in_ru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_ru
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3503
- Accuracy: 0.6884
- F1: 0.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 441 | 0.7397 | 0.6630 | 0.6530 |
| 0.771 | 2.0 | 882 | 0.7143 | 0.6909 | 0.6905 |
| 0.5449 | 3.0 | 1323 | 0.8385 | 0.6897 | 0.6870 |
| 0.3795 | 4.0 | 1764 | 0.8851 | 0.6939 | 0.6914 |
| 0.3059 | 5.0 | 2205 | 1.0728 | 0.6933 | 0.6953 |
| 0.2673 | 6.0 | 2646 | 1.0673 | 0.7060 | 0.7020 |
| 0.2358 | 7.0 | 3087 | 1.5200 | 0.6830 | 0.6829 |
| 0.2069 | 8.0 | 3528 | 1.3439 | 0.7024 | 0.7016 |
| 0.2069 | 9.0 | 3969 | 1.3545 | 0.6830 | 0.6833 |
| 0.1724 | 10.0 | 4410 | 1.5591 | 0.6927 | 0.6902 |
| 0.1525 | 11.0 | 4851 | 1.6425 | 0.6818 | 0.6823 |
| 0.131 | 12.0 | 5292 | 1.8999 | 0.6836 | 0.6775 |
| 0.1253 | 13.0 | 5733 | 1.6959 | 0.6884 | 0.6877 |
| 0.1132 | 14.0 | 6174 | 1.9561 | 0.6776 | 0.6803 |
| 0.0951 | 15.0 | 6615 | 2.0356 | 0.6763 | 0.6754 |
| 0.1009 | 16.0 | 7056 | 1.7995 | 0.6842 | 0.6741 |
| 0.1009 | 17.0 | 7497 | 2.0638 | 0.6884 | 0.6811 |
| 0.0817 | 18.0 | 7938 | 2.1686 | 0.6884 | 0.6859 |
| 0.0691 | 19.0 | 8379 | 2.0874 | 0.6878 | 0.6889 |
| 0.0656 | 20.0 | 8820 | 2.1772 | 0.6854 | 0.6817 |
| 0.0652 | 21.0 | 9261 | 2.4018 | 0.6872 | 0.6896 |
| 0.0608 | 22.0 | 9702 | 2.2074 | 0.6770 | 0.6656 |
| 0.0677 | 23.0 | 10143 | 2.2101 | 0.6848 | 0.6793 |
| 0.0559 | 24.0 | 10584 | 2.2920 | 0.6848 | 0.6835 |
| 0.0524 | 25.0 | 11025 | 2.3503 | 0.6884 | 0.6875 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
848 | chrommium/sbert_large-finetuned-sent_in_news_sents | [
"LABEL_-3",
"LABEL_-2",
"LABEL_-1",
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sbert_large-finetuned-sent_in_news_sents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7056
- Accuracy: 0.7301
- F1: 0.5210
## Model examples
The model reacts to the label X in the news text. For example:
For 'Газпром отозвал лицензию у X, сообщает Финам' the model returns the negative label -3.
For 'X отозвал лицензию у Сбербанка, сообщает Финам' the model returns the neutral label 0.
For 'Газпром отозвал лицензию у Сбербанка, сообщает X' the model returns the neutral label 0.
For 'X демонстрирует высокую прибыль, сообщает Финам' the model returns the positive label 1.
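The raw output labels have the form `LABEL_-3` … `LABEL_3`; a small helper (our addition, not part of the original card) recovers the signed sentiment score from such a label:

```python
def label_to_score(label: str) -> int:
    """Convert a label like 'LABEL_-3' or 'LABEL_1' to its signed integer score."""
    return int(label.split("_", 1)[1])
```

For example, `label_to_score('LABEL_-3')` returns `-3`.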
## Simple example of News preprocessing for Russian before BERT
```
from natasha import (
Segmenter,
MorphVocab,
NewsEmbedding,
NewsMorphTagger,
NewsSyntaxParser,
NewsNERTagger,
PER,
NamesExtractor,
Doc
)
segmenter = Segmenter()
emb = NewsEmbedding()
morph_tagger = NewsMorphTagger(emb)
syntax_parser = NewsSyntaxParser(emb)
morph_vocab = MorphVocab()
### ----------------------------- key sentences block -----------------------------
def find_synax_tokens_with_order(doc, start, tokens, text_arr, full_str):
    ''' Finds all syntax tokens that correspond to a given set of plain tokens (found
    for a particular NER by the other functions).
    Returns a dict of the found syntax tokens (keyed by a token id composed of the
    sentence number and the token number within the sentence).
    Starts searching from the given position in the syntax-token list, and additionally
    returns the stop position from which the search for the next NER should continue.
    '''
found = []
in_str = False
str_candidate = ''
str_counter = 0
if len(text_arr) == 0:
return [], start
for i in range(start, len(doc.syntax.tokens)):
t = doc.syntax.tokens[i]
if in_str:
str_counter += 1
if str_counter < len(text_arr) and t.text == text_arr[str_counter]:
str_candidate += t.text
found.append(t)
if str_candidate == full_str:
return found, i+1
else:
in_str = False
str_candidate = ''
str_counter = 0
found = []
if t.text == text_arr[0]:
found.append(t)
str_candidate = t.text
if str_candidate == full_str:
return found, i+1
in_str = True
return [], len(doc.syntax.tokens)
def find_tokens_in_diap_with_order(doc, start_token, diap):
    ''' Finds all plain tokens (without syntax information) that fall within the
    given span. These spans come from the NER annotation.
    Returns the found tokens both as a token array and as an array of strings.
    Starts searching from the given position in the string, and additionally returns the stop position.
    '''
found_tokens = []
found_text = []
full_str = ''
next_i = 0
for i in range(start_token, len(doc.tokens)):
t = doc.tokens[i]
if t.start > diap[-1]:
next_i = i
break
if t.start in diap:
found_tokens.append(t)
found_text.append(t.text)
full_str += t.text
return found_tokens, found_text, full_str, next_i
def add_found_arr_to_dict(found, dict_dest):
for synt in found:
dict_dest.update({synt.id: synt})
return dict_dest
def make_all_syntax_dict(doc):
all_syntax = {}
for synt in doc.syntax.tokens:
all_syntax.update({synt.id: synt})
return all_syntax
def is_consiquent(id_1, id_2):
    ''' Checks whether two tokens immediately follow each other (by key, with no gap). '''
id_1_list = id_1.split('_')
id_2_list = id_2.split('_')
if id_1_list[0] != id_2_list[0]:
return False
return int(id_1_list[1]) + 1 == int(id_2_list[1])
def replace_found_to(found, x_str):
    ''' Replaces a sequence of NER tokens with a placeholder. '''
prev_id = '0_0'
for synt in found:
if is_consiquent(prev_id, synt.id):
synt.text = ''
else:
synt.text = x_str
prev_id = synt.id
def analyze_doc(text):
    ''' Runs Natasha analysis on the document. '''
doc = Doc(text)
doc.segment(segmenter)
doc.tag_morph(morph_tagger)
doc.parse_syntax(syntax_parser)
ner_tagger = NewsNERTagger(emb)
doc.tag_ner(ner_tagger)
return doc
def find_non_sym_syntax_short(entity_name, doc, add_X=False, x_str='X'):
    ''' Searches the text for the given entity among all NERs (possibly in a
    different grammatical form).
    entity_name - the entity to search for;
    doc - a document preprocessed with Natasha;
    add_X - whether to replace the entity with a placeholder;
    x_str - the replacement text.
    Returns:
    all_found_syntax - a dict of all matching tokens that form the sought entities,
    with the NERs replaced by the placeholder where requested;
    all_syntax - a dict of all tokens.
    '''
all_found_syntax = {}
current_synt_number = 0
current_tok_number = 0
    # iterate over all found NERs
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
diap = range(span.start, span.stop)
        # build a dict of all syntax elements (key: id made of the sentence number and the position within the sentence)
all_syntax = make_all_syntax_dict(doc)
        # find all plain tokens inside the NER
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc, current_tok_number,
diap)
        # from the found plain tokens, find all syntax tokens inside this NER
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens, found_text,
full_str)
        # if the NER text matches the given entity, do the replacement
if entity_name.find(span.normal) >= 0 or span.normal.find(entity_name) >= 0:
if add_X:
replace_found_to(found, x_str)
all_found_syntax = add_found_arr_to_dict(found, all_found_syntax)
return all_found_syntax, all_syntax
def key_sentences(all_found_syntax):
    ''' Finds the numbers of the sentences containing the sought NER. '''
key_sent_numb = {}
for synt in all_found_syntax.keys():
key_sent_numb.update({synt.split('_')[0]: 1})
return key_sent_numb
def openinig_punct(x):
opennings = ['«', '(']
return x in opennings
def key_sentences_str(entity_name, doc, add_X=False, x_str='X', return_all=True):
    ''' Builds the final text, keeping only the sentences that contain the key
    entity; if requested, that entity is replaced with a placeholder.
    '''
    all_found_syntax, all_syntax = find_non_sym_syntax_short(entity_name, doc, add_X, x_str)
key_sent_numb = key_sentences(all_found_syntax)
str_ret = ''
for s in all_syntax.keys():
if (s.split('_')[0] in key_sent_numb.keys()) or (return_all):
to_add = all_syntax[s]
if s in all_found_syntax.keys():
to_add = all_found_syntax[s]
else:
if to_add.rel == 'punct' and not openinig_punct(to_add.text):
str_ret = str_ret.rstrip()
str_ret += to_add.text
if (not openinig_punct(to_add.text)) and (to_add.text != ''):
str_ret += ' '
return str_ret
### ----------------------------- key entities block -----------------------------
def find_synt(doc, synt_id):
for synt in doc.syntax.tokens:
if synt.id == synt_id:
return synt
return None
def is_subj(doc, synt, recursion_list=[]):
    ''' Reports whether the word is the subject or part of a compound subject. '''
if synt.rel == 'nsubj':
return True
if synt.rel == 'appos':
found_head = find_synt(doc, synt.head_id)
if found_head.id in recursion_list:
return False
return is_subj(doc, found_head, recursion_list + [synt.id])
return False
def find_subjects_in_syntax(doc):
    ''' Returns a dict stating, for each NER, whether it is the subject of its
    sentence.
    Keyed by the NER start position; the value says whether it was a subject (or appos).
    '''
found_subjects = {}
current_synt_number = 0
current_tok_number = 0
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
found_subjects.update({span.start: 0})
diap = range(span.start, span.stop)
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc,
current_tok_number,
diap)
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens,
found_text, full_str)
found_subjects.update({span.start: 0})
for synt in found:
if is_subj(doc, synt):
found_subjects.update({span.start: 1})
return found_subjects
def entity_weight(lst, c=1):
return c*lst[0]+lst[1]
def determine_subject(found_subjects, doc, new_agency_list, return_best=True, threshold=0.75):
    ''' Determines the key NER and a list of the most important NERs, based on how
    often each one occurs in the text overall and how often in the subject role. '''
objects_arr = []
objects_arr_ners = []
should_continue = False
for span in doc.spans:
should_continue = False
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
if span.normal in new_agency_list:
continue
for i in range(len(objects_arr)):
t, lst = objects_arr[i]
if t.find(span.normal) >= 0:
lst[0] += 1
lst[1] += found_subjects[span.start]
should_continue = True
break
if span.normal.find(t) >= 0:
objects_arr[i] = (span.normal, [lst[0]+1, lst[1]+found_subjects[span.start]])
should_continue = True
break
if should_continue:
continue
objects_arr.append((span.normal, [1, found_subjects[span.start]]))
objects_arr_ners.append(span.normal)
max_weight = 0
opt_ent = 0
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight < w:
max_weight = w
opt_ent = t
if not return_best:
return opt_ent, objects_arr_ners
bests = []
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight*threshold < w:
bests.append(t)
return opt_ent, bests
text = '''В офисах Сбера начали тестировать технологию помощи посетителям в экстренных ситуациях. «Зеленая кнопка» будет
в зонах круглосуточного обслуживания офисов банка в Воронеже, Санкт-Петербурге, Подольске, Пскове, Орле и Ярославле.
В них находятся стенды с сенсорными кнопками, обеспечивающие связь с операторами центра мониторинга службы безопасности
банка. Получив сигнал о помощи, оператор центра может подключиться к объекту по голосовой связи. С помощью камер
видеонаблюдения он оценит обстановку и при необходимости вызовет полицию или скорую помощь. «Зеленой кнопкой» можно
воспользоваться в нерабочее для отделения время, если возникла угроза жизни или здоровью. В остальных случаях помочь
клиентам готовы сотрудники отделения банка. «Одно из направлений нашей работы в области ESG и устойчивого развития
— это забота об обществе. И здоровье людей как высшая ценность является его основой. Поэтому задача банка в области
безопасности гораздо масштабнее, чем обеспечение только финансовой безопасности клиентов. Этот пилотный проект
приурочен к 180-летию Сбербанка: мы хотим, чтобы, приходя в банк, клиент чувствовал, что его жизнь и безопасность
— наша ценность», — отметил заместитель председателя правления Сбербанка Станислав Кузнецов.'''
doc = analyze_doc(text)
key_entity = determine_subject(find_subjects_in_syntax(doc), doc, [])[0]
text_for_model = key_sentences_str(key_entity, doc, add_X=True, x_str='X', return_all=False)
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 176 | 0.9504 | 0.6903 | 0.2215 |
| No log | 2.0 | 352 | 0.9065 | 0.7159 | 0.4760 |
| 0.8448 | 3.0 | 528 | 0.9687 | 0.7045 | 0.4774 |
| 0.8448 | 4.0 | 704 | 1.2436 | 0.7045 | 0.4686 |
| 0.8448 | 5.0 | 880 | 1.4809 | 0.7273 | 0.4630 |
| 0.2074 | 6.0 | 1056 | 1.5866 | 0.7330 | 0.5185 |
| 0.2074 | 7.0 | 1232 | 1.7056 | 0.7301 | 0.5210 |
| 0.2074 | 8.0 | 1408 | 1.6982 | 0.7415 | 0.5056 |
| 0.0514 | 9.0 | 1584 | 1.8088 | 0.7273 | 0.5203 |
| 0.0514 | 10.0 | 1760 | 1.9250 | 0.7102 | 0.4879 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
849 | chrommium/sbert_large-finetuned-sent_in_news_sents_3lab | [
"LABEL_-1",
"LABEL_0",
"LABEL_1"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sbert_large-finetuned-sent_in_news_sents_3lab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents_3lab
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9443
- Accuracy: 0.8580
- F1: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 264 | 0.6137 | 0.8608 | 0.3084 |
| 0.524 | 2.0 | 528 | 0.6563 | 0.8722 | 0.4861 |
| 0.524 | 3.0 | 792 | 0.7110 | 0.8494 | 0.4687 |
| 0.2225 | 4.0 | 1056 | 0.7323 | 0.8608 | 0.6015 |
| 0.2225 | 5.0 | 1320 | 0.9604 | 0.8551 | 0.6185 |
| 0.1037 | 6.0 | 1584 | 0.8801 | 0.8523 | 0.5535 |
| 0.1037 | 7.0 | 1848 | 0.9443 | 0.8580 | 0.6199 |
| 0.0479 | 8.0 | 2112 | 1.0048 | 0.8608 | 0.6168 |
| 0.0479 | 9.0 | 2376 | 0.9757 | 0.8551 | 0.6097 |
| 0.0353 | 10.0 | 2640 | 1.0743 | 0.8580 | 0.6071 |
| 0.0353 | 11.0 | 2904 | 1.1216 | 0.8580 | 0.6011 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
850 | chrommium/xlm-roberta-large-finetuned-sent_in_news | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-large-finetuned-sent_in_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-sent_in_news
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8872
- Accuracy: 0.7273
- F1: 0.5125
## Model description
The model is asymmetric: it reacts to the label X in the news text.
Try the following examples:
a) Агентство X понизило рейтинг банка Fitch.
b) Агентство Fitch понизило рейтинг банка X.
a) Компания Финам показала рекордную прибыль, говорят аналитики компании X.
b) Компания X показала рекордную прибыль, говорят аналитики компании Финам.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 106 | 1.2526 | 0.6108 | 0.1508 |
| No log | 2.0 | 212 | 1.1553 | 0.6648 | 0.1141 |
| No log | 3.0 | 318 | 1.1150 | 0.6591 | 0.1247 |
| No log | 4.0 | 424 | 1.0007 | 0.6705 | 0.1383 |
| 1.1323 | 5.0 | 530 | 0.9267 | 0.6733 | 0.2027 |
| 1.1323 | 6.0 | 636 | 1.0869 | 0.6335 | 0.4084 |
| 1.1323 | 7.0 | 742 | 1.1224 | 0.6932 | 0.4586 |
| 1.1323 | 8.0 | 848 | 1.2535 | 0.6307 | 0.3424 |
| 1.1323 | 9.0 | 954 | 1.4288 | 0.6932 | 0.4881 |
| 0.5252 | 10.0 | 1060 | 1.5856 | 0.6932 | 0.4739 |
| 0.5252 | 11.0 | 1166 | 1.7101 | 0.6733 | 0.4530 |
| 0.5252 | 12.0 | 1272 | 1.7330 | 0.6903 | 0.4750 |
| 0.5252 | 13.0 | 1378 | 1.8872 | 0.7273 | 0.5125 |
| 0.5252 | 14.0 | 1484 | 1.8797 | 0.7301 | 0.5033 |
| 0.1252 | 15.0 | 1590 | 1.9339 | 0.7330 | 0.5024 |
| 0.1252 | 16.0 | 1696 | 1.9632 | 0.7301 | 0.4967 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
855 | clem/autonlp-test3-2101779 | [
"not_urgent",
"urgent"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- clem/autonlp-data-test3
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2101779
## Validation Metrics
- Loss: 0.282466858625412
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/clem/autonlp-test3-2101779
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("clem/autonlp-test3-2101779", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("clem/autonlp-test3-2101779", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
856 | clem/autonlp-test3-2101782 | [
"not_urgent",
"urgent"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- clem/autonlp-data-test3
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2101782
## Validation Metrics
- Loss: 0.015991805121302605
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/clem/autonlp-test3-2101782
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("clem/autonlp-test3-2101782", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("clem/autonlp-test3-2101782", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
857 | clem/autonlp-test3-2101787 | [
"not_urgent",
"urgent"
] | ---
tags: autonlp
language: en
widget:
- text: "this can wait"
datasets:
- clem/autonlp-data-test3
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification (Urgent/Not Urgent)
## Validation Metrics
- Loss: 0.08956164121627808
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/clem/autonlp-test3-2101787
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("clem/autonlp-test3-2101787", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("clem/autonlp-test3-2101787", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
858 | climatebert/distilroberta-base-climate-commitment | [
"no",
"yes"
] | ---
license: apache-2.0
datasets:
- climatebert/climate_commitments_actions
language:
- en
metrics:
- accuracy
---
# Model Card for distilroberta-base-climate-commitment
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into those that are about climate commitments and actions and those that are not.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as a starting point, the distilroberta-base-climate-commitment model is fine-tuned on our [climatebert/climate_commitments_actions](https://huggingface.co/datasets/climatebert/climate_commitments_actions) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_commitments_actions"
model_name = "climatebert/distilroberta-base-climate-commitment"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
``` |
859 | climatebert/distilroberta-base-climate-detector | [
"no",
"yes"
] | ---
license: apache-2.0
datasets:
- climatebert/climate_detection
language:
- en
metrics:
- accuracy
---
# Model Card for distilroberta-base-climate-detector
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for detecting climate-related paragraphs.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as a starting point, the distilroberta-base-climate-detector model is fine-tuned on our [climatebert/climate_detection](https://huggingface.co/datasets/climatebert/climate_detection) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_detection"
model_name = "climatebert/distilroberta-base-climate-detector"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
``` |
860 | climatebert/distilroberta-base-climate-sentiment | [
"neutral",
"opportunity",
"risk"
] | ---
license: apache-2.0
datasets:
- climatebert/climate_sentiment
language:
- en
metrics:
- accuracy
---
# Model Card for distilroberta-base-climate-sentiment
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the climate-related sentiment classes opportunity, neutral, or risk.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as a starting point, the distilroberta-base-climate-sentiment model is fine-tuned on our [climatebert/climate_sentiment](https://huggingface.co/datasets/climatebert/climate_sentiment) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_sentiment"
model_name = "climatebert/distilroberta-base-climate-sentiment"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
``` |
861 | climatebert/distilroberta-base-climate-specificity | [
"non",
"spec"
] | ---
license: apache-2.0
datasets:
- climatebert/climate_specificity
language:
- en
metrics:
- accuracy
tags:
- climate
---
# Model Card for distilroberta-base-climate-specificity
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into specific and non-specific paragraphs.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as a starting point, the distilroberta-base-climate-specificity model is fine-tuned on our [climatebert/climate_specificity](https://huggingface.co/datasets/climatebert/climate_specificity) dataset.
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/climate_specificity"
model_name = "climatebert/distilroberta-base-climate-specificity"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
``` |
862 | climatebert/distilroberta-base-climate-tcfd | [
"governance",
"metrics",
"risk",
"strategy"
] | ---
license: apache-2.0
datasets:
- climatebert/tcfd_recommendations
language:
- en
metrics:
- accuracy
tags:
- climate
---
# Model Card for distilroberta-base-climate-tcfd
## Model Description
This is the fine-tuned ClimateBERT language model with a classification head for classifying climate-related paragraphs into the four TCFD recommendation categories ([fsb-tcfd.org](https://www.fsb-tcfd.org)).
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as a starting point, the distilroberta-base-climate-tcfd model is fine-tuned on our [climatebert/tcfd_recommendations](https://huggingface.co/datasets/climatebert/tcfd_recommendations) dataset, using only the four recommendation categories (i.e., we remove the non-climate-related class from the dataset).
*Note: This model is trained on paragraphs. It may not perform well on sentences.*
## Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
## How to Get Started With the Model
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm
dataset_name = "climatebert/tcfd_recommendations"
model_name = "climatebert/distilroberta-base-climate-tcfd"
# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
print(out)
``` |
863 | cmarkea/distilcamembert-base-nli | [
"contradiction",
"entailment",
"neutral"
] | ---
language: fr
license: mit
tags:
- zero-shot-classification
- sentence-similarity
- nli
pipeline_tag: zero-shot-classification
widget:
- text: "Selon certains physiciens, un univers parallèle, miroir du nôtre ou relevant de ce que l'on appelle la théorie des branes, autoriserait des neutrons à sortir de notre Univers pour y entrer à nouveau. L'idée a été testée une nouvelle fois avec le réacteur nucléaire de l'Institut Laue-Langevin à Grenoble, plus précisément en utilisant le détecteur de l'expérience Stereo initialement conçu pour chasser des particules de matière noire potentielles, les neutrinos stériles."
candidate_labels: "politique, science, sport, santé"
hypothesis_template: "Ce texte parle de {}."
datasets:
- flue
---
DistilCamemBERT-NLI
===================
We present DistilCamemBERT-NLI, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the Natural Language Inference (NLI) task in French, also known as recognizing textual entailment (RTE). The model is trained on the XNLI dataset, which consists of determining whether a premise entails, contradicts, or neither entails nor contradicts a hypothesis.
This model is close to [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), which is based on the [CamemBERT](https://huggingface.co/camembert-base) model. The drawback of CamemBERT-based models shows up at scaling time, for example in the production phase: inference cost can become a technological issue, especially for cross-encoding tasks like this one. To counteract this effect, we propose this model, which divides the inference time by 2 at the same power consumption, thanks to DistilCamemBERT.
Dataset
-------
The XNLI dataset from [FLUE](https://huggingface.co/datasets/flue) comprises 392,702 premise–hypothesis pairs for training and 5,010 pairs for testing. The goal is to predict textual entailment (does sentence A imply, contradict, or neither imply nor contradict sentence B?), a three-class classification task. Sentence A is called the *premise* and sentence B the *hypothesis*; the model then estimates:
$$P(premise=c\in\{contradiction, entailment, neutral\}\vert hypothesis)$$
Evaluation results
------------------
| **class** | **precision (%)** | **f1-score (%)** | **support** |
| :----------------: | :---------------: | :--------------: | :---------: |
| **global** | 77.70 | 77.45 | 5,010 |
| **contradiction** | 78.00 | 79.54 | 1,670 |
| **entailment** | 82.90 | 78.87 | 1,670 |
| **neutral** | 72.18 | 74.04 | 1,670 |
Benchmark
---------
We compare the [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base)-based model to two other models working on French. The first, [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), is based on the well-known [CamemBERT](https://huggingface.co/camembert-base), the French RoBERTa model; the second, [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli), is based on [mDeBERTav3](https://huggingface.co/microsoft/mdeberta-v3-base), a multilingual model. To compare performance, we use accuracy and the [MCC (Matthews Correlation Coefficient)](https://en.wikipedia.org/wiki/Phi_coefficient). Mean inference time was measured on an **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores**.
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** |
| :--------------: | :-----------: | :--------------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **51.35** | 77.45 | 66.24 |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 105.0 | 81.72 | 72.67 |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 299.18 | **83.43** | **75.15** |
Zero-shot classification
------------------------
The main advantage of such a model is that it can serve as a zero-shot classifier, allowing text classification without training. This task can be summarized by:
$$P(hypothesis=i\in\mathcal{C}|premise)=\frac{e^{P(premise=entailment\vert hypothesis=i)}}{\sum_{j\in\mathcal{C}}e^{P(premise=entailment\vert hypothesis=j)}}$$
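As a concrete illustration of the formula above, the following minimal sketch (plain NumPy, with made-up entailment scores standing in for the NLI model's actual outputs) shows how per-label entailment scores are turned into zero-shot class probabilities:

```python
import numpy as np

def zero_shot_probs(entailment_scores):
    """Softmax over per-label entailment scores, as in the formula above.

    `entailment_scores` are hypothetical entailment values, one per candidate
    label, as would be obtained by running the NLI cross-encoder on each
    (premise, "Ce texte parle de <label>.") pair.
    """
    e = np.exp(entailment_scores - np.max(entailment_scores))  # numerically stable softmax
    return e / e.sum()

# Made-up scores for the candidate labels ["cinéma", "politique", "sport"]
probs = zero_shot_probs(np.array([2.1, 0.3, -0.8]))
```

The resulting vector sums to 1, and the label with the highest entailment score gets the highest probability.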
For this part, we use two datasets. The first, [allocine](https://huggingface.co/datasets/allocine), is used to train sentiment analysis models and comprises two classes of movie-review appreciation: "positif" and "négatif". Here we use "Ce commentaire est {}." as the hypothesis template and "positif" and "négatif" as candidate labels.
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** |
| :--------------: | :-----------: | :--------------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **195.54** | 80.59 | 63.71 |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 378.39 | **86.37** | **73.74** |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 520.58 | 84.97 | 70.05 |
The second, [mlsum](https://huggingface.co/datasets/mlsum), is used to train summarization models. For this evaluation, we aggregate sub-topics, select a few of them, and use the article summaries to predict their topics. In this case, the hypothesis template is "C'est un article traitant de {}." and the candidate labels are: "économie", "politique", "sport" and "science".
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** |
| :--------------: | :-----------: | :--------------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **217.77** | **79.30** | **70.55** |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 448.27 | 70.7 | 64.10 |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 591.34 | 64.45 | 58.67 |
How to use DistilCamemBERT-NLI
------------------------------
```python
from transformers import pipeline
classifier = pipeline(
task='zero-shot-classification',
model="cmarkea/distilcamembert-base-nli",
tokenizer="cmarkea/distilcamembert-base-nli"
)
result = classifier(
sequences="Le style très cinéphile de Quentin Tarantino "
"se reconnaît entre autres par sa narration postmoderne "
"et non linéaire, ses dialogues travaillés souvent "
"émaillés de références à la culture populaire, et ses "
"scènes hautement esthétiques mais d'une violence "
"extrême, inspirées de films d'exploitation, d'arts "
"martiaux ou de western spaghetti.",
candidate_labels="cinéma, technologie, littérature, politique",
hypothesis_template="Ce texte parle de {}."
)
result
{"labels": ["cinéma",
"littérature",
"technologie",
"politique"],
"scores": [0.7164115309715271,
0.12878799438476562,
0.1092301607131958,
0.0455702543258667]}
```
### Optimum + ONNX
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
HUB_MODEL = "cmarkea/distilcamembert-base-nli"
tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL)
model = ORTModelForSequenceClassification.from_pretrained(HUB_MODEL)
onnx_qa = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)
# Quantized onnx model
quantized_model = ORTModelForSequenceClassification.from_pretrained(
HUB_MODEL, file_name="model_quantized.onnx"
)
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
``` |
864 | cmarkea/distilcamembert-base-sentiment | [
"1 star",
"2 stars",
"3 stars",
"4 stars",
"5 stars"
] | ---
language: fr
license: mit
datasets:
- amazon_reviews_multi
- allocine
widget:
- text: "Je pensais lire un livre nul, mais finalement je l'ai trouvé super !"
- text: "Cette banque est très bien, mais elle n'offre pas les services de paiements sans contact."
- text: "Cette banque est très bien et elle offre en plus les services de paiements sans contact."
---
DistilCamemBERT-Sentiment
=========================
We present DistilCamemBERT-Sentiment, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for sentiment analysis in French. The model is built using two datasets, [Amazon Reviews](https://huggingface.co/datasets/amazon_reviews_multi) and [Allociné.fr](https://huggingface.co/datasets/allocine), to minimize bias: Amazon reviews are similar in tone and relatively short, whereas Allociné reviews are long, rich texts.
This model is close to [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine), which is based on the [CamemBERT](https://huggingface.co/camembert-base) model. The drawback of CamemBERT-based models shows up at scaling time, for example in the production phase: inference cost can become a technological issue. To counteract this effect, we propose this model, which **divides the inference time by two** at the same power consumption thanks to [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base).
Dataset
-------
The dataset comprises 204,993 reviews for training and 4,999 for testing from Amazon, plus 235,516 and 4,729 reviews from the [Allociné website](https://www.allocine.fr/). The dataset is labeled into five categories:
* 1 star: represents a terrible appreciation,
* 2 stars: bad appreciation,
* 3 stars: neutral appreciation,
* 4 stars: good appreciation,
* 5 stars: excellent appreciation.
Evaluation results
------------------
In addition to accuracy (called *exact accuracy* here), and in order to be robust to +/-1 star estimation errors, we use the following performance measure:
$$\mathrm{top\!-\!2\; acc}=\frac{1}{|\mathcal{O}|}\sum_{i\in\mathcal{O}}\sum_{0\leq l < 2}\mathbb{1}(\hat{f}_{i,l}=y_i)$$
where \\(\hat{f}_l\\) is the l-th largest predicted label, \\(y\\) the true label, \\(\mathcal{O}\\) is the test set of the observations and \\(\mathbb{1}\\) is the indicator function.
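The top-2 accuracy above can be computed in a few lines of NumPy. This is a sketch with made-up predicted distributions, not the actual evaluation code:

```python
import numpy as np

def top2_accuracy(probas, y_true):
    """A prediction counts as correct if the true label is among the two
    highest-scoring classes, which absorbs +/-1 star estimation errors."""
    top2 = np.argsort(probas, axis=1)[:, -2:]  # indices of the 2 largest scores per row
    return float(np.mean([y in row for y, row in zip(y_true, top2)]))

# Made-up predicted distributions over 3 classes; the true label is 2 each time
probas = np.array([[0.1, 0.6, 0.3],   # label 2 is 2nd best -> counted correct
                   [0.5, 0.4, 0.1],   # label 2 is 3rd best -> counted wrong
                   [0.2, 0.3, 0.5]])  # label 2 is best     -> counted correct
acc = top2_accuracy(probas, [2, 2, 2])
```

With these toy inputs, two of the three predictions place the true label in their top two classes.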
| **class** | **exact accuracy (%)** | **top-2 acc (%)** | **support** |
| :---------: | :--------------------: | :---------------: | :---------: |
| **global** | 61.01 | 88.80 | 9,698 |
| **1 star** | 87.21 | 77.17 | 1,905 |
| **2 stars** | 79.19 | 84.75 | 1,935 |
| **3 stars** | 77.85 | 78.98 | 1,974 |
| **4 stars** | 78.61 | 90.22 | 1,952 |
| **5 stars** | 85.96 | 82.92 | 1,932 |
Benchmark
---------
This model is compared to three reference models (see below). Since the models do not all share the same target definitions, we detail the performance measure used for each. Mean inference time was measured on an **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores**.
#### bert-base-multilingual-uncased-sentiment
[nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) is based on the BERT model in its multilingual, uncased version. This sentiment analyzer is trained on Amazon reviews, like our model, so the targets and their definitions are the same.
| **model** | **time (ms)** | **exact accuracy (%)** | **top-2 acc (%)** |
| :-------: | :-----------: | :--------------------: | :---------------: |
| [cmarkea/distilcamembert-base-sentiment](https://huggingface.co/cmarkea/distilcamembert-base-sentiment) | **95.56** | **61.01** | **88.80** |
| [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) | 187.70 | 54.41 | 82.82 |
#### tf-allociné and barthez-sentiment-classification
[tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine), based on the [CamemBERT](https://huggingface.co/camembert-base) model, and [moussaKam/barthez-sentiment-classification](https://huggingface.co/moussaKam/barthez-sentiment-classification), based on [BARThez](https://huggingface.co/moussaKam/barthez), share the same binary label definition. To bring our model back to a two-class problem, we consider only the *"1 star"* and *"2 stars"* labels for *negative* sentiment and *"4 stars"* and *"5 stars"* for *positive* sentiment, excluding *"3 stars"*, which can be interpreted as a *neutral* class. In this context, the problem of +/-1 star estimation errors disappears, so we use the classical accuracy definition.
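The star-to-binary mapping described above can be sketched as follows (a hypothetical helper written for illustration, not part of any of the compared models):

```python
def stars_to_binary(label):
    """Collapse 5-star labels into the benchmark's two-class scheme:
    1-2 stars -> "negative", 4-5 stars -> "positive", 3 stars excluded."""
    stars = int(label.split()[0])  # e.g. "4 stars" -> 4
    if stars <= 2:
        return "negative"
    if stars >= 4:
        return "positive"
    return None  # neutral "3 stars" reviews are left out of the evaluation

mapped = [stars_to_binary(s) for s in ["1 star", "3 stars", "5 stars"]]
```

Reviews mapped to `None` are simply dropped before computing accuracy on the remaining two classes.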
| **model** | **time (ms)** | **exact accuracy (%)** |
| :-------: | :-----------: | :--------------------: |
| [cmarkea/distilcamembert-base-sentiment](https://huggingface.co/cmarkea/distilcamembert-base-sentiment) | **95.56** | **97.52** |
| [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine) | 329.74 | 95.69 |
| [moussaKam/barthez-sentiment-classification](https://huggingface.co/moussaKam/barthez-sentiment-classification) | 197.95 | 94.29 |
How to use DistilCamemBERT-Sentiment
------------------------------------
```python
from transformers import pipeline
analyzer = pipeline(
task='text-classification',
model="cmarkea/distilcamembert-base-sentiment",
tokenizer="cmarkea/distilcamembert-base-sentiment"
)
result = analyzer(
"J'aime me promener en forêt même si ça me donne mal aux pieds.",
return_all_scores=True
)
result
[{'label': '1 star',
'score': 0.047529436647892},
{'label': '2 stars',
'score': 0.14150355756282806},
{'label': '3 stars',
'score': 0.3586442470550537},
{'label': '4 stars',
'score': 0.3181498646736145},
{'label': '5 stars',
'score': 0.13417290151119232}]
```
### Optimum + ONNX
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
HUB_MODEL = "cmarkea/distilcamembert-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL)
model = ORTModelForSequenceClassification.from_pretrained(HUB_MODEL)
onnx_qa = pipeline("text-classification", model=model, tokenizer=tokenizer)
# Quantized onnx model
quantized_model = ORTModelForSequenceClassification.from_pretrained(
HUB_MODEL, file_name="model_quantized.onnx"
)
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
``` |
868 | cointegrated/rubert-base-cased-dp-paraphrase-detection | [
"entailment",
"not_entailment"
] | ---
language: ["ru"]
tags:
- sentence-similarity
- text-classification
datasets:
- merionum/ru_paraphraser
---
This is a version of the paraphrase detector by DeepPavlov ([details in the documentation](http://docs.deeppavlov.ai/en/master/features/overview.html#ranking-model-docs)) ported to the `Transformers` format.
All credit goes to the authors of DeepPavlov.
The model has been trained on the dataset from http://paraphraser.ru/.
It classifies text pairs as paraphrases (class 1) or non-paraphrases (class 0).
```python
import torch
from transformers import AutoModelForSequenceClassification, BertTokenizer
model_name = 'cointegrated/rubert-base-cased-dp-paraphrase-detection'
model = AutoModelForSequenceClassification.from_pretrained(model_name).cuda()
tokenizer = BertTokenizer.from_pretrained(model_name)
def compare_texts(text1, text2):
batch = tokenizer(text1, text2, return_tensors='pt').to(model.device)
with torch.inference_mode():
proba = torch.softmax(model(**batch).logits, -1).cpu().numpy()
return proba[0] # p(non-paraphrase), p(paraphrase)
print(compare_texts('Сегодня на улице хорошая погода', 'Сегодня на улице отвратительная погода'))
# [0.7056226 0.2943774]
print(compare_texts('Сегодня на улице хорошая погода', 'Отличная погодка сегодня выдалась'))
# [0.16524374 0.8347562 ]
```
P.S. In the DeepPavlov repository, the tokenizer uses `max_seq_length=64`.
This model, however, uses `model_max_length=512`.
Therefore, the results on long texts may be inadequate. |