modelId stringlengths 6 107 | label list | readme stringlengths 0 56.2k | readme_len int64 0 56.2k |
|---|---|---|---|
FinanceInc/finbert_fls | [
"Not FLS",
"Non-specific FLS",
"Specific FLS"
] | ---
language: "en"
tags:
- financial-text-analysis
- forward-looking-statement
widget:
- text: "We expect the age of our fleet to enhance availability and reliability due to reduced downtime for repairs. "
---
Forward-looking statements (FLS) inform investors of managers’ beliefs and opinions about a firm's future events or results. Identifying forward-looking statements in corporate reports can assist investors in financial analysis. FinBERT-FLS is a FinBERT model fine-tuned on 3,500 manually annotated sentences from the Management Discussion and Analysis sections of annual reports of Russell 3000 firms.
**Input**: A financial text.
**Output**: Specific FLS, Non-specific FLS, or Not FLS.
# How to use
You can use this model with the Transformers pipeline for forward-looking statement classification.
```python
# tested in transformers==4.18.0
from transformers import BertTokenizer, BertForSequenceClassification, pipeline
finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-fls',num_labels=3)
tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-fls')
nlp = pipeline("text-classification", model=finbert, tokenizer=tokenizer)
results = nlp('We expect the age of our fleet to enhance availability and reliability due to reduced downtime for repairs.')
print(results) # [{'label': 'Specific FLS', 'score': 0.77278733253479}]
```
Visit [FinBERT.AI](https://finbert.ai/) for more details on the recent development of FinBERT. | 1,472 |
Fujitsu/AugCode | null | ---
inference: false
license: mit
widget:
language:
- en
metrics:
- mrr
datasets:
- augmented_codesearchnet
---
# 🔥 Augmented Code Model 🔥
This is the Augmented Code Model, a fine-tuned version of [CodeBERT](https://huggingface.co/microsoft/codebert-base) for scoring the similarity between a given docstring and code. It was fine-tuned on the Augmented Code Corpus with ACS=4.
## How to use the model
As with other Hugging Face models, you can load the model as follows.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Fujitsu/AugCode")
model = AutoModelForSequenceClassification.from_pretrained("Fujitsu/AugCode")
```
Then you may use `model` to infer the similarity between a given docstring and code.
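To tokenize a docstring/code pair, pass them to the tokenizer as a sentence pair, e.g. `tokenizer(docstring, code, return_tensors="pt")`, and feed the result to `model` to obtain logits. As an illustrative sketch of turning those logits into a similarity score, a softmax over a two-class head gives a match probability. Note the head layout assumed here (index 1 = "match") is an assumption; check the model's `id2label` config for the actual ordering:

```python
import math

def similarity_from_logits(logits):
    """Convert a [no-match, match] logit pair into a match probability
    via softmax. The index-1-is-match layout is an assumption; verify
    it against the model's id2label mapping."""
    exps = [math.exp(x) for x in logits]
    return exps[1] / sum(exps)

# Logits as they might come from model(**inputs).logits[0].tolist()
print(round(similarity_from_logits([-1.2, 2.3]), 3))  # 0.971
```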
### Citation
```bibtex
@misc{bahrami2021augcode,
title={AugmentedCode: Examining the Effects of Natural Language Resources in Code Retrieval Models},
author={Mehdi Bahrami and N. C. Shrikanth and Yuji Mizobuchi and Lei Liu and Masahiro Fukuyori and Wei-Peng Chen and Kazuki Munakata},
year={2021},
eprint={TBA},
archivePrefix={TBA},
primaryClass={cs.CL}
}
``` | 1,172 |
GD/cq-bert-model-repo | null | Entry not found | 15 |
Hate-speech-CNERG/dehatebert-mono-italian | [
"NON_HATE",
"HATE"
] | ---
language: it
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Italian language**. The *mono* in the name refers to the monolingual setting, where the model is trained using only Italian-language data. It is fine-tuned from the multilingual BERT model.
The model is trained with different learning rates, and the best validation score achieved is 0.837288 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| 1,058 |
Mathking/bert-base-german-cased-gnad10 | [
"Web",
"Panorama",
"International",
"Wirtschaft",
"Sport",
"Inland",
"Etat",
"Wissenschaft",
"Kultur"
] | ---
language:
- de
datasets:
- gnad10
tags:
- text-classification
- german-news-classification
metrics:
- accuracy
- precision
- recall
- f1
---
# German BERT for News Classification
This is a bert-base-german-cased model fine-tuned for text classification on German news articles.
## Training data
The model was trained on the training set of the 10kGNAD dataset (`gnad10` on Hugging Face Datasets). | 377 |
NDugar/3epoch-3large | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and is trained with 160GB of raw data.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuAD v2.0 would also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **DeepSpeed**, as it's faster and saves memory.
Run with `DeepSpeed`:
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
```bibtex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` | 4,788 |
Rifky/IndoBERT-FakeNews | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: IndoBERT-FakeNews
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndoBERT-FakeNews
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 222 | 0.2507 |
| No log | 2.0 | 444 | 0.3830 |
| 0.2755 | 3.0 | 666 | 0.5660 |
| 0.2755 | 4.0 | 888 | 0.5165 |
| 0.1311 | 5.0 | 1110 | 0.5573 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| 1,561 |
adresgezgini/Finetuned-SentiBERtr-Pos-Neg-Reviews | null | Entry not found | 15 |
blanchefort/rubert-base-cased-sentiment-mokoron | null | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuTweetCorp
---
# RuBERT for Sentiment Analysis of Tweets
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuTweetCorp](https://study.mokoron.com/).
## Labels
0: POSITIVE
1: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
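The `predict` helper above returns an array of class indices; a small mapping, following the Labels section of this card, converts them to names (the indices below are illustrative, not actual model output):

```python
# Index-to-label mapping from the Labels section of this card.
LABELS = {0: "POSITIVE", 1: "NEGATIVE"}

def to_label_names(indices):
    """Map numeric class indices (as returned by predict) to label names."""
    return [LABELS[int(i)] for i in indices]

print(to_label_names([0, 1]))  # ['POSITIVE', 'NEGATIVE']
```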
## Dataset used for model training
**[RuTweetCorp](https://study.mokoron.com/)**
> Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора // Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116.
| 1,359 |
boychaboy/MNLI_distilbert-base-uncased | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
cardiffnlp/twitter-roberta-base-stance-feminist | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | 0 | |
pablouribe/beto-copus-supercategories-overfitted | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
pparasurama/raceBERT-ethnicity | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
prajjwal1/roberta-large-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | If you use the model, please consider citing the paper
```bibtex
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Roberta-large trained on MNLI.
----------------------
| Task | Accuracy |
|---------|----------|
| MNLI | 90.15 |
| MNLI-mm | 90.02 |
You can also check out:
- `prajjwal1/roberta-base-mnli`
- `prajjwal1/roberta-large-mnli`
- `prajjwal1/albert-base-v2-mnli`
- `prajjwal1/albert-base-v1-mnli`
- `prajjwal1/albert-large-v2-mnli`
[@prajjwal_1](https://twitter.com/prajjwal_1)
| 869 |
clips/republic | [
"neg",
"neu",
"pos"
] | ---
pipeline_tag: text-classification
language:
- nl
tags:
- text classification
- sentiment analysis
- domain adaptation
widget:
- text: "De NMBS heeft recent de airconditioning in alle treinen vernieuwd."
example_title: "POS-NMBS"
- text: "De wegenwerken langs de E34 blijven al maanden aanhouden."
example_title: "NEG-AWV"
- text: "Natuur en Bos is erin geslaagd 100 hectaren bosgebied te beschermen."
example_title: "POS-ANB"
- text: "Het FWO financiert te weinig excellent onderzoek."
example_title: "NEG-FWO"
- text: "De Lijn is op zoek naar nieuwe buschauffeurs."
example_title: "NEU-De Lijn"
---
# RePublic
### Model description
RePublic (<u>re</u>putation analyzer for <u>public</u> service organizations) is a Dutch BERT model based on BERTje (De Vries, 2019). The model was designed to predict the sentiment in Dutch-language news article text about public agencies. RePublic was developed by CLiPS in collaboration with [Jan Boon](https://www.uantwerpen.be/en/staff/jan-boon/).
### How to use
The model can be loaded and used to make predictions as follows:
```python
from transformers import pipeline
model_path = 'clips/republic'
pipe = pipeline(task="text-classification",
model=model_path, tokenizer=model_path)
text = … # load your text here
output = pipe(text)
prediction = output[0]['label'] # 0="neutral"; 1="positive"; 2="negative"
```
### Training data and procedure
RePublic was domain-adapted on 91 661 Flemish news articles published between 2000 and 2020 by three popular Flemish news providers (“Het Laatste Nieuws”, “Het Nieuwsblad” and “De Morgen”). These articles mention at least one organization from a pre-defined list of 24 public service organizations, which includes, among others, De Lijn (public transport organization), VDAB (Flemish job placement service), and Agentschap Zorg en Gezondheid (healthcare service). The domain adaptation was achieved by performing BERT’s language modeling tasks (masked language modeling & next sentence prediction).
The model was then fine-tuned on a sentiment classification task (“positive”, “negative”, “neutral”). The supervised data consisted of 4404 annotated sentences mentioning Flemish public agencies of which 1257 sentences were positive, 1485 sentences were negative and 1662 sentences were neutral. Fine-tuning was performed for 4 epochs using a batch size of 8 and a learning rate of 5e-5. In order to evaluate the model, a 10-fold cross validation experiment was conducted. The results of this experiment can be found below.
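The macro-averaged row in the cross-validation table below is simply the unweighted mean of the three per-class scores; for the F1 column, for example:

```python
def macro_average(per_class_scores):
    """Unweighted mean of per-class scores, rounded to one decimal,
    as reported in the macro-averaged row of the table."""
    return round(sum(per_class_scores) / len(per_class_scores), 1)

# Per-class F1 scores (positive, negative, neutral)
print(macro_average([88.0, 86.5, 84.7]))  # 86.4
```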
| **Class** | **Precision (%)** | **Recall (%)** | **F1-score (%)** |
|:---:|:---:|:---:|:---:|
| _Positive_ | 87.3 | 88.6 | 88.0 |
| _Negative_ | 86.4 | 86.5 | 86.5 |
| _Neutral_ | 85.3 | 84.2 | 84.7 |
| _Macro-averaged_ | 86.3 | 86.4 | 86.4 | | 2,766 |
Smith123/tiny-bert-sst2-distilled_L4_H_512 | [
"negative",
"positive"
] | Entry not found | 15 |
anahitapld/bert-base-cased-dbd | null | ---
license: apache-2.0
---
| 28 |
JHart96/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8627450980392156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3300
- Accuracy: 0.86
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,505 |
shubhamitra/TinyBERT_General_4L_312D-finetuned-toxic-classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
tags:
- generated_from_trainer
model-index:
- name: TinyBERT_General_4L_312D-finetuned-toxic-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyBERT_General_4L_312D-finetuned-toxic-classification
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 498 | 0.0483 | 0.7486 | 0.8563 | 0.9171 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,504 |
Danitg95/autotrain-kaggle-effective-arguments-1086739296 | [
"Adequate",
"Effective",
"Ineffective"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Danitg95/autotrain-data-kaggle-effective-arguments
co2_eq_emissions: 5.2497206864306065
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1086739296
- CO2 Emissions (in grams): 5.2497206864306065
## Validation Metrics
- Loss: 0.744236171245575
- Accuracy: 0.6719238613188308
- Macro F1: 0.5450301061253738
- Micro F1: 0.6719238613188308
- Weighted F1: 0.6349879540623229
- Macro Precision: 0.6691326843926052
- Micro Precision: 0.6719238613188308
- Weighted Precision: 0.6706209016443158
- Macro Recall: 0.5426627824078865
- Micro Recall: 0.6719238613188308
- Weighted Recall: 0.6719238613188308
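Note that the micro-averaged F1, precision, and recall above all equal the accuracy; for single-label multi-class classification this is an identity, not a coincidence. A small self-contained check (with illustrative data, not output from this model) shows why:

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def micro_f1(y_true, y_pred):
    # Micro-averaging pools TP/FP/FN over all classes; in a single-label
    # multi-class problem every wrong prediction is one FP (for the
    # predicted class) and one FN (for the true class), so micro
    # precision = micro recall = micro F1 = accuracy.
    labels = set(y_true) | set(y_pred)
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = sum(p == c and t != c for c in labels for t, p in zip(y_true, y_pred))
    fn = sum(t == c and p != c for c in labels for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ["Adequate", "Effective", "Ineffective", "Adequate"]
y_pred = ["Adequate", "Ineffective", "Ineffective", "Effective"]
print(accuracy(y_true, y_pred) == micro_f1(y_true, y_pred))  # True
```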
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Danitg95/autotrain-kaggle-effective-arguments-1086739296
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Danitg95/autotrain-kaggle-effective-arguments-1086739296", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Danitg95/autotrain-kaggle-effective-arguments-1086739296", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,463 |
scales-okn/entity-resolution | null | Entry not found | 15 |
XSY/albert-base-v2-imdb-calssification | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: albert-base-v2-imdb-calssification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93612
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-imdb-calssification
label_0: negative
label_1: positive
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1983
- Accuracy: 0.9361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.26 | 1.0 | 1563 | 0.1983 | 0.9361 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,658 |
abdelkader/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declined",
"carry_on",
"change_accent",
"change_ai_name",
"change_language",
"change_speed",
"change_user_name",
"change_volume",
"confirm_reservation",
"cook_time",
"credit_limit",
"credit_limit_change",
"credit_score",
"current_location",
"damaged_card",
"date",
"definition",
"direct_deposit",
"directions",
"distance",
"do_you_have_pets",
"exchange_rate",
"expiration_date",
"find_phone",
"flight_status",
"flip_coin",
"food_last",
"freeze_account",
"fun_fact",
"gas",
"gas_type",
"goodbye",
"greeting",
"how_busy",
"how_old_are_you",
"improve_credit_score",
"income",
"ingredient_substitution",
"ingredients_list",
"insurance",
"insurance_change",
"interest_rate",
"international_fees",
"international_visa",
"jump_start",
"last_maintenance",
"lost_luggage",
"make_call",
"maybe",
"meal_suggestion",
"meaning_of_life",
"measurement_conversion",
"meeting_schedule",
"min_payment",
"mpg",
"new_card",
"next_holiday",
"next_song",
"no",
"nutrition_info",
"oil_change_how",
"oil_change_when",
"oos",
"order",
"order_checks",
"order_status",
"pay_bill",
"payday",
"pin_change",
"play_music",
"plug_type",
"pto_balance",
"pto_request",
"pto_request_status",
"pto_used",
"recipe",
"redeem_rewards",
"reminder",
"reminder_update",
"repeat",
"replacement_card_duration",
"report_fraud",
"report_lost_card",
"reset_settings",
"restaurant_reservation",
"restaurant_reviews",
"restaurant_suggestion",
"rewards_balance",
"roll_dice",
"rollover_401k",
"routing",
"schedule_maintenance",
"schedule_meeting",
"share_location",
"shopping_list",
"shopping_list_update",
"smart_home",
"spelling",
"spending_history",
"sync_device",
"taxes",
"tell_joke",
"text",
"thank_you",
"time",
"timer",
"timezone",
"tire_change",
"tire_pressure",
"todo_list",
"todo_list_update",
"traffic",
"transactions",
"transfer",
"translate",
"travel_alert",
"travel_notification",
"travel_suggestion",
"uber",
"update_playlist",
"user_name",
"vaccines",
"w2",
"weather",
"what_are_your_hobbies",
"what_can_i_ask_you",
"what_is_your_name",
"what_song",
"where_are_you_from",
"whisper_mode",
"who_do_you_work_for",
"who_made_you",
"yes"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7713
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2831 | 0.7426 |
| 3.785 | 2.0 | 636 | 1.8739 | 0.8335 |
| 3.785 | 3.0 | 954 | 1.1525 | 0.8926 |
| 1.6894 | 4.0 | 1272 | 0.8569 | 0.91 |
| 0.897 | 5.0 | 1590 | 0.7713 | 0.9174 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,890 |
elozano/tweet_offensive_eval | [
"Non-Offensive",
"Offensive"
] | ---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "You're a complete idiot!"
example_title: "Offensive"
- text: "I am tired of studying for tomorrow's exam"
example_title: "Non-Offensive"
---
| 226 |
sgugger/finetuned-bert-mrpc | [
"equivalent",
"not equivalent"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model_index:
- name: finetuned-bert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metric:
name: F1
type: f1
value: 0.8791946308724832
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4917
- Accuracy: 0.8235
- F1: 0.8792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5382 | 1.0 | 230 | 0.4008 | 0.8456 | 0.8893 |
| 0.3208 | 2.0 | 460 | 0.4182 | 0.8309 | 0.8844 |
| 0.1587 | 3.0 | 690 | 0.4917 | 0.8235 | 0.8792 |
### Framework versions
- Transformers 4.9.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.8.1.dev0
- Tokenizers 0.10.1
| 1,749 |
yoshitomo-matsubara/bert-base-uncased-rte | null | ---
language: en
tags:
- bert
- rte
- glue
- torchdistill
license: apache-2.0
datasets:
- rte
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on RTE dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/rte/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
| 822 |
Intel/bert-base-uncased-mrpc-int8-qat | [
"0",
"1"
] | ---
language: en
license: apache-2.0
tags:
- text-classification
- int8
- Intel® Neural Compressor
- QuantizationAwareTraining
datasets:
- mrpc
metrics:
- f1
---
# INT8 BERT base uncased finetuned MRPC
### QuantizationAwareTraining
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc).
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **F1 (eval-f1)** |0.9142|0.9042|
| **Model size (MB)** |107|418|
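As a quick sanity check on the table above, quantizing 32-bit float weights to 8-bit integers should shrink the model roughly fourfold, which matches the reported sizes:

```python
fp32_mb, int8_mb = 418, 107  # model sizes from the table above
ratio = fp32_mb / int8_mb
print(round(ratio, 2))  # 3.91, close to the 4x expected from fp32 -> int8
```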
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/bert-base-uncased-mrpc-int8-qat',
)
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- train_batch_size: 8
- eval_batch_size: 8
- eval_steps: 100
- load_best_model_at_end: True
- metric_for_best_model: f1
- early_stopping_patience = 6
- early_stopping_threshold = 0.001
| 1,243 |
jenspt/bert_regression | [
"LABEL_0"
] | Entry not found | 15 |
waboucay/camembert-large-finetuned-xnli_fr_3_classes | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on the `validation` and `test` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 85.8 | 85.9 |
| test | 84.2 | 84.3 | | 367 |
Maxbnza/country-recognition | [
"Austria",
"Belgium",
"Denmark",
"Finland",
"France",
"Germany",
"Israel",
"Italy",
"Netherlands",
"Norway",
"Others",
"Poland",
"Portugal",
"Saudi Arabia",
"South Africa",
"Spain",
"Sweden",
"Switzerland",
"Turkey",
"United Arab Emirates",
"United Kingdom",
"United States"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Maxbnza/autotrain-data-address-training
co2_eq_emissions: 141.11976199388627
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1062136864
- CO2 Emissions (in grams): 141.11976199388627
## Validation Metrics
- Loss: 0.10147109627723694
- Accuracy: 0.9859325979151907
- Macro F1: 0.9715036017680622
- Micro F1: 0.9859325979151907
- Weighted F1: 0.9859070541468058
- Macro Precision: 0.9732956651937184
- Micro Precision: 0.9859325979151907
- Weighted Precision: 0.9860574596777458
- Macro Recall: 0.970199341807239
- Micro Recall: 0.9859325979151907
- Weighted Recall: 0.9859325979151907
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Maxbnza/autotrain-address-training-1062136864
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Maxbnza/autotrain-address-training-1062136864", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Maxbnza/autotrain-address-training-1062136864", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,421 |
IMSyPP/hate_speech_slo | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
pipeline_tag: text-classification
inference: true
widget:
- text: "Sem Mark in živim v Ljubljani. Sem doktorski študent na Mednarodni podiplomski šoli Jožefa Stefana."
language:
- sl
license: mit
---
# Hate Speech Classifier for Social Media Content in Slovenian Language
A monolingual model for hate speech classification of social media content in Slovenian language. The model was trained on 50,000 Twitter comments and tested on an independent test set of 10,000 Twitter comments. It is based on the multilingual CroSloEngual BERT pre-trained language model.
## Tokenizer
During training the text was preprocessed using the original CroSloEngual BERT tokenizer. We suggest the same tokenizer is used for inference.
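If the raw classifier output uses the default `LABEL_<id>` names, it can be mapped to the four classes described under "Model output" below with a small helper. This is a hypothetical sketch, assuming the default label naming; `ID2CLASS` and `decode` are illustrative names, not part of the model:

```python
# Hypothetical decoder: maps default LABEL_<id> names to the class
# names listed in the "Model output" section.
ID2CLASS = {0: "acceptable", 1: "inappropriate", 2: "offensive", 3: "violent"}

def decode(raw_label: str) -> str:
    return ID2CLASS[int(raw_label.split("_")[1])]

print(decode("LABEL_2"))  # -> offensive
```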
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | 879 |
ItcastAI/bert_cn_finetuning | null | Entry not found | 15 |
NYTK/sentiment-hts5-hubert-hungarian | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
language:
- hu
tags:
- text-classification
license: gpl
metrics:
- accuracy
widget:
- text: "Jó reggelt! majd küldöm az élményhozókat :)."
---
# Hungarian Sentence-level Sentiment Analysis model with huBERT
For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained model used: huBERT
- Finetuned on Hungarian Twitter Sentiment (HTS) Corpus
- Labels: 1, 2, 3, 4, 5
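If the classifier returns the default `LABEL_0`…`LABEL_4` names, mapping them onto the 1–5 sentiment scale above is a simple offset. A minimal sketch, assuming that default naming (`to_rating` is an illustrative helper, not part of the model):

```python
def to_rating(raw_label: str) -> int:
    """Map a default LABEL_<id> name (LABEL_0..LABEL_4) to the 1-5 scale."""
    return int(raw_label.split("_")[1]) + 1

print(to_rating("LABEL_4"))  # -> 5
```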
## Limitations
- max_seq_length = 128
## Results
| Model | HTS2 | HTS5 |
| ------------- | ------------- | ------------- |
| huBERT | 85.55 | **68.99** |
| XLM-RoBERTa| 85.56 | 85.56 |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods},
booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)},
year = {2021},
publisher = {IEEE},
address = {Online},
    author = {Laki, László and Yang, Zijian Győző},
    pages = {417--422}
}
``` | 1,148 |
TransQuest/monotransquest-da-ro_en-wiki | [
"LABEL_0"
] | ---
language: ro-en
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the language pairs we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-ro_en-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 5,401 |
Wiirin/BioBERT-finetuned-PubMed-FoodCancer | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
baykenney/bert-base-gpt2detector-random | [
"Human",
"Machine"
] | Entry not found | 15 |
boychaboy/SNLI_bert-large-cased | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
bvanaken/CORe-clinical-mortality-prediction | [
"0",
"1"
] | ---
language: "en"
tags:
- bert
- medical
- clinical
- mortality
thumbnail: "https://core.app.datexis.com/static/paper.png"
---
# CORe Model - Clinical Mortality Risk Prediction
## Model description
The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf).
It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.
This model checkpoint is **fine-tuned on the task of mortality risk prediction**.
The model expects patient admission notes as input and outputs the predicted risk of in-hospital mortality.
#### How to use CORe Mortality Risk Prediction
You can load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
```
The following code shows an inference example:
```python
import torch

input = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life."
tokenized_input = tokenizer(input, return_tensors="pt")
output = model(**tokenized_input)

# softmax over the two classes; index 1 is the predicted in-hospital mortality risk
predictions = torch.softmax(output.logits.detach(), dim=1)
mortality_risk_prediction = predictions[0][1].item()
```
### More Information
For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/).
### Cite
```bibtex
@inproceedings{vanaken21,
author = {Betty van Aken and
Jens-Michalis Papaioannou and
Manuel Mayrdorfer and
Klemens Budde and
Felix A. Gers and
Alexander Löser},
title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised
Knowledge Integration},
booktitle = {Proceedings of the 16th Conference of the European Chapter of the
Association for Computational Linguistics: Main Volume, {EACL} 2021,
Online, April 19 - 23, 2021},
publisher = {Association for Computational Linguistics},
year = {2021},
}
``` | 2,431 |
echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid | null | ---
language: en
license: apache-2.0
tags:
- text-classification
datasets:
- qqp
metrics:
- F1
---
## bert-base-uncased model fine-tuned on QQP
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the linear layers contain **36%** of the original weights.
The model contains **50%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
<div class="graph"><script src="/echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid/raw/main/model_card/density_info.js" id="70162e64-2a82-4147-ac7a-864cfe18a013"></script></div>
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on the QQP task, and distilled from the model [textattack/bert-base-uncased-QQP](https://huggingface.co/textattack/bert-base-uncased-QQP).
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of block pruning is that some of the attention heads are completely removed: 54 heads were removed out of a total of 144 (37.5%).
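The stated head-pruning ratio checks out directly:

```python
# Quick check of the head-pruning figure: 54 of 144 heads removed.
removed, total = 54, 144
print(f"{removed / total:.1%} of attention heads removed")  # -> 37.5% of attention heads removed
```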
<div class="graph"><script src="/echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid/raw/main/model_card/pruning_info.js" id="f4fb8229-3e66-406e-b99f-f771ce6117c8"></script></div>
## Details of the QQP dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| QQP | train | 364K |
| QQP | eval | 40K |
### Results
**Pytorch model file size**: `377MB` (original BERT: `420MB`)
| Metric | # Value |
| ------ | --------- |
| **F1** | **87.87** |
| 1,650 |
monologg/koelectra-small-finetuned-intent-cls | [
"(일반)제품 주문",
"(현금)영수증문의",
"0인분 용량 문의",
"1인1잔문의",
"1인메뉴문의",
"1인분배달문의",
"1인식사자리요청",
"2박이상요금운의",
"3구4구 테이블 문의",
"AS가능문의",
"AS기간문의",
"MD제품문의",
"OO금액의매물문의",
"OO아파트매물문의",
"OO지역매물문의",
"OO지역상가건물매매매물문의",
"OO지역상가사무실임차문의",
"OO학군OO학교인근의매물문의",
"SPF지수",
"cctv설치유무문의",
"null",
"가게앞주차여부",
"가게이름 문의",
"가격 단위 문의",
"가격 문의",
"가격 변동(오름내림) 문의",
"가격 비교",
"가격 차이 문의",
"가격 차이에 대한 문의",
"가격대별문의",
"가격문의",
"가격변경문의",
"가격별 떡 주문",
"가격불만요청",
"가격비교",
"가격비교문의",
"가격에대한문의",
"가격임의할인요구",
"가격제한시선택문의",
"가격책정이유문의",
"가격표 부착문의",
"가격할인 문의",
"가격할인 요청",
"가격할인 조건",
"가는방법문의",
"가르치는선생님문의",
"가방가죽관리문의",
"가방각인문의",
"가방끈교체문의",
"가방끈문의",
"가방모조품문의",
"가방방수여부",
"가방브랜드문의",
"가방사이즈에따른제품요청",
"가방사이즈에따른제품요청문의",
"가방소재문의",
"가방소재특성문의",
"가방스타일문의",
"가방외제품문의",
"가방제품문의요청",
"가상계좌 납부자명 문의",
"가위요청",
"가입 조회",
"가정내 세탁방법 문의",
"각인문의",
"간식,급식비문의",
"간식반입가능문의",
"간이영수증요청",
"감면/면제 대상",
"감사문구 박스 표시 요청",
"갓 구운 빵 문의요청",
"강습 문의",
"강의시간문의",
"개발계획문의",
"개봉여부문의",
"개인 기기 반입 문의",
"개인 신발장 문의",
"개인 용품 사용 요청",
"개인 위생 도구 요청",
"개인 짐 보관 요청",
"개인별진도차이문의",
"개인정보 확인 후 세탁물 요청",
"객실교체문의",
"객실문의",
"갱신/재발급",
"거래가",
"거래가문의",
"거래의뢰",
"거리문의",
"거울위치문의",
"거점대학교문의",
"거주인원문의",
"거주지역매장문의연락처",
"건강보조식품 문의",
"건물규모",
"건물내사무실위치",
"건물내위치",
"건의사항",
"게임 가맹점 문의",
"게임 설치 여부 문의",
"게임 설치 여부문의",
"게임 설치 요청",
"게임 실행 오류 수정 요청",
"견본제품문의",
"결석시부모님연락서비스유무문의",
"결석시연락서비스유무문의",
"결제 문의",
"결제 문의요청",
"결제 방법 문의",
"결제 수단 문의",
"결제 시기",
"결제 시점 문의",
"결제 요청",
"결제문의",
"결제방법",
"결제방법문의",
"결제방식문의",
"결제방식선택",
"결제수단 문의",
"결제요청",
"계산서요청",
"계약결제요청",
"계약금문의",
"계약금확인문의",
"계약날짜시간정하기",
"계약문의",
"계약방법문의",
"계약서작성문의",
"계약서작성장소문의",
"계약시기문의",
"계약완료확인",
"계약요청",
"계약일정문의",
"계열사문의",
"계절메뉴문의",
"계절메뉴주문문의",
"계절상품문의",
"계절한정상품문의",
"계좌 이체 문의",
"계좌번호요청",
"계좌이체 문의",
"계좌이체가능문의",
"계좌이체문의",
"고객사이즈제시에따른제품선택문의",
"고기 썰기 주문",
"고기 용량 단위 문의",
"고사떡",
"고지서 발송 여부",
"고품질 제품 주문",
"공 요청",
"공과금포함여부",
"공부방유무문의",
"공시지가문의",
"공실여부문의",
"공임비 문의",
"공통",
"공항픽업서비스문의",
"공휴일 영업 문의",
"공휴일영업문의",
"과일 익은 상태 문의",
"과일당도 문의",
"과정별교재문의",
"과태료",
"과표",
"관광지문의",
"관리비문의",
"관리비유무문의",
"관할 사업소 전화번호",
"광고 제품 문의",
"광고나PPL제품문의",
"광고제품문의",
"교육과정추천",
"교육비문의",
"교육비에교재비포함문의",
"교육비에도복비포함문의",
"교육장소위치문의",
"교재문의",
"교재비문의",
"교재비할인문의",
"교재진도문의",
"교재판매문의",
"교통이편리한지",
"교통통제문의",
"교통편의문의",
"교환기간에따른교환요구",
"교환환불문의",
"구매제품과코디할의류제품추천요청",
"구매제품할인문의",
"구비서류",
"구입제품착용여부에따른포장요구",
"구입제품포장문의",
"구제품세탁여부문의",
"구조별매물문의",
"국 판매 문의",
"국물따로요청",
"국물요청",
"국물제공문의",
"국산제품 요청",
"굽높이문의",
"굽문의",
"귀뚫어주는지문의",
"귀중품보관문의",
"그랜드피아노유무확인",
"근처장볼곳문의",
"글램핑장크기문의",
"금매입문의",
"금시세문의",
"금액대별매물문의",
"금액문의",
"금연 구역 문의",
"금연석 문의",
"금연석 요청",
"금함량문의",
"급수별점수문의",
"급식.간식서비스유무문의",
"기간에따른수업시간표문의",
"기계 사용방법 문의",
"기능성 제품 문의",
"기능성 헤어 제품",
"기능성제품문의",
"기능에따른제품문의",
"기본등급요구",
"기본반찬요구",
"기본반찬종류문의",
"기장 추가 가격 문의",
"기타 고기 문의",
"기타 문의",
"기타 요청",
"기타문의",
"기타서비스문의",
"기프티콘쿠폰멤버실결제요구",
"기프티콘쿠폰멤버십결제요구",
"기프티콘쿠폰멤버십메뉴변경문의",
"기프티콘쿠폰멤버십사용문의",
"기한내수선요청",
"긴급세탁 요청",
"긴급수선문의",
"길찾기",
"난방시설 문의",
"남성 전용 미용실 문의",
"남은음료테이크아웃문의",
"남은음식포장요구",
"남은커피가루문의",
"납부기한",
"납부월 확인(격월)",
"납품문의",
"낮은 가격의 제품 문의",
"낱개 포장 문의",
"낱개 포장 제품 요청",
"내구성문의",
"내진설계질문",
"냅킨물티슈요구",
"냅킨요구",
"냅킵물티슈요구",
"냉난방문의",
"냉난방시설문의",
"냉동인지생물인지문의",
"냉방조절",
"냉온수문의",
"네이버페이",
"네이버페이 가능 문의",
"넵킨물수건요청",
"누수 탐지 신청",
"누적포인트사용",
"늘먹던음료요구",
"다른계절상품문의",
"다른그릇요청",
"다른매물문의",
"다른매장과가격비교문의",
"다른매장교환문의",
"다른매장위치문의",
"다른매장재고여부확인요청",
"다른브랜드제품문의",
"다른사람제품확인요청",
"다른사이즈피팅문의",
"다른악기수업문의",
"다음 염색 필요 시기 문의",
"단계기간문의",
"단골손님임을주장",
"단기어학연수기간문의",
"단수",
"단수가능한나이문의",
"단수따는기간문의",
"단수문의",
"단순 구매 요청",
"단순 구매요청",
"단순매물문의",
"단체 문의",
"단체룸 문의",
"단체석 문의",
"단체석문의",
"단체석요구",
"단체주문 문의",
"단체주문문의",
"단품 문의",
"단품과세트메뉴차이문의",
"단품구매시행사사은품문의",
"답례품",
"당구 종목 문의",
"당구장 위치 문의",
"당일 떡 문의",
"대기 문의",
"대기 시간 문의",
"대기 요청",
"대기번호확인요청",
"대기시간 문의",
"대기시간문의",
"대기자수문의",
"대량구매시 택배 가능 문의",
"대리신청가능여부",
"대실문의",
"대실비용문의",
"대실사용시간문의",
"대여문의",
"대체수업문의",
"대출관련",
"대표상품 문의",
"대학진로문의",
"대학진학률문의",
"대회참가견학문의",
"더치커피티백판매문의",
"도복가격",
"도장앞안전성문의",
"도장크기.컨디션문의",
"독서실 내 소음 문의",
"독서실 내 시설 문의",
"독서실 이용 문의",
"돌떡",
"동 제품 타 브랜드 문의",
"동류타제품문의",
"동서남북향문의",
"동시 시술 문의",
"동시 시술 요청",
"동제품타사이즈문의",
"동제품타색상문의",
"두과목이상수강문의",
"두메뉴의차이에대한질문",
"두피 상태 문의",
"두피 케어 서비스 가격 문의",
"두피 케어 서비스 문의",
"두피 케어 서비스 요청",
"드라이 및 스타일링",
"드라이방법 설명 요청",
"드라이크리닝여부문의",
"등기부등본관련문의",
"등기부등본확인요청",
"등록 기간 문의",
"등록 상황 문의",
"등록/이전 확인",
"등록가능연령문의",
"등록방법문의",
"등록비용문의",
"등록인원수에대한할인문의",
"등원시차량이학생을기다려주는시간문의",
"디너메뉴문의",
"디자인 또는 리폼 가능 문의",
"디자인문의",
"디자인별제품추천요청",
"디자인선택",
"디저트메뉴문의",
"땅매도의뢰",
"땅매수문의",
"땅의용도",
"땅의형태문의",
"땅주인연락처요청",
"땅추천요청",
"떡 구성 문의",
"떡 나오는 시간 문의",
"떡 배달 요청",
"떡 변질 문의",
"떡 보관 문의",
"떡 완료 연락 요청",
"떡 완성 소요시간 문의",
"떡 이름 각인 가능 여부 문의",
"떡 재고 문의",
"떡 재료에 대한 문의",
"떡 절단 요청",
"떡 제작 형태",
"떡 제조 요청",
"떡 제조시간 문의",
"떡 종류 문의",
"떡 주문",
"떡 주문 제작 문의",
"떡 추천 요청",
"떡 포장 요청",
"떡 혼합 주문 문의",
"떡국 떡떡볶이 떡",
"떡볶이 떡 문의",
"뜨거운차가운물요구",
"라면기계문의",
"라운지위치문의",
"런치메뉴주문문의",
"런치세트문의",
"런치타임문의",
"레벨별교재문의",
"레벨별구분문의",
"레벨별반구분문의",
"레벨테스트문의요청",
"레시피 문의",
"렌탈문의",
"렌터카문의",
"룸가격차이문의",
"룸가능인원문의",
"룸교체문의",
"룸서비스문의",
"룸시설문의",
"룸업그레이드요청",
"룸요구",
"룸위치문의",
"룸컨디션문의",
"룸타입문의",
"리필 가능 여부 문의",
"리필문의",
"리필제품",
"리필제품 가격",
"마감 시간 문의",
"마감시간 문의",
"마감시간문의",
"마네킹착용제품문의",
"마당문의",
"마세 가능 문의",
"마우스 오류 수정 요청",
"마지막타임시간문의",
"마트나시장은가까운지",
"만두 구매 문의",
"만드는방법문의",
"많이판매되지않는제품문의",
"말소등록",
"맛 문의",
"맛 비교",
"맛에 대한 문의",
"맛에 대한 문의요청",
"맛에대한문의",
"맛에대한질문",
"맛정도조절요청",
"맛조절요청",
"맛집문의",
"맞춤 떡 문의",
"맞춤기간문의",
"맞춤문의",
"매는방식에따른제품요청",
"매매가문의",
"매매가임대가",
"매매가임대가조정문의",
"매매가중개보수문의",
"매매매물문의",
"매매시기문의",
"매매중개보수문의",
"매몰보기요청",
"매물거래확인",
"매물건문의",
"매물나온땅보러가기요청",
"매물나온시기",
"매물나온이유문의",
"매물나온집개수문의",
"매물나온집보기요청",
"매물나온집언제볼수있는지문의",
"매물내부사진요청",
"매물담당자관련문의",
"매물문의",
"매물보기요청",
"매물보는데걸리는시간문의",
"매물보는시간약속정하기",
"매물보러가는시간약속정하기",
"매물조건문의",
"매물주소문의",
"매운정도조절요청",
"매일달라지는오늘의메뉴문의",
"매장 내 사용제품 가격 및 사용법 문의",
"매장 내 사용제품 구입가능 문의",
"매장 문의",
"매장 안내정보 문의",
"매장 전화번호",
"매장 주차장 문의",
"매장구경요청",
"매장내식사가능문의",
"매장내에서일회용사용문의",
"매장내위치문의",
"매장내자체수리여부",
"매장내자체수선여부",
"매장명뜻문의",
"매장상호문의",
"매장에서착용문의",
"매장연락처문의",
"매장연락처요청",
"매장오픈문의",
"매장위치문의",
"매장이용문의",
"매장제품구경",
"매장제품콘셉트문의",
"매장확인",
"매직 문의",
"매직 요청",
"맵지 않은 반찬 요청",
"머그컵요구",
"먹는방법문의",
"메뉴 문의",
"메뉴가나오는것을재촉하는상황",
"메뉴교환시추가가격문의",
"메뉴만든시간",
"메뉴문의",
"메뉴사이즈문의",
"메뉴서비스요청",
"메뉴세팅요청",
"메뉴있는스티커요청",
"메뉴정보문의",
"메뉴주문",
"메뉴주문문의",
"메뉴주문확인",
"메뉴차이문의",
"메뉴추가문의",
"메뉴추가문의요청",
"메뉴추가주문",
"메뉴추천요구",
"메뉴추천요청",
"메뉴판 요청",
"메뉴판문의",
"메뉴판에있는음식에대한질문",
"메뉴판요구",
"메뉴판요청",
"메뉴할인문의",
"면적문의",
"면적별매물문의",
"명의자 변경",
"명절음식 문의",
"명품스타일가방문의",
"명품정품문의",
"명함 요청",
"명함요청",
"모닝콜요청",
"모바일페이결제",
"모발 상태 문의",
"모발 케어 서비스 가격 문의",
"모발 케어 서비스 요청",
"모발관리 추천 서비스 문의",
"모발관리 추천 제품 요청",
"모양 문의",
"모의고사관련문의",
"모임고객문의",
"무게별가방선택문의",
"무료음료제공문의",
"무료주차시간문의",
"무료쿠폰사용",
"무선 인터넷 문의",
"무선데이터 문의",
"무이자할부요구",
"무인시스템 오류 수정 요청",
"무인시스템 위치 문의",
"무인시스템 이용 문의",
"문 닫는 시간 문의",
"문 여는 시간 문의",
"문닫는 시간 문의",
"문닫는 시간문의",
"문자안내요청",
"물물컵",
"물요청",
"물우유량요구",
"물티슈요구",
"물품위치문의",
"미끄럼방지제품문의",
"미대준비비용문의",
"미리주문메뉴문의",
"미성년자 이용 문의",
"미술도구문의",
"미술분야문의",
"바비큐숯불문의",
"바비큐시설문의",
"바비큐재료문의",
"반반메뉴문의",
"반별학생구성문의",
"반별학생수문의",
"반별학생의학교와학년문의",
"반찬 가격 문의",
"반찬 구비시간 문의",
"반찬 문의",
"반찬 세트 문의",
"반찬 유무 문의",
"반찬 종류 문의",
"반찬 주문",
"반찬 추천 요청",
"반찬명",
"반찬명 문의",
"반찬재료 문의",
"반찬판매문의",
"받는쿠폰수문의",
"발급 수수료",
"발급기간",
"발급문의",
"발급서류",
"발색 문의",
"밥요청",
"밥포함여부문의",
"방 크기 문의",
"방內가전제품문의",
"방문 시간 문의",
"방문가능시간문의",
"방문수업가능문의",
"방문수업과학원수업차이문의",
"방문시간문의",
"방문의",
"방문인",
"방수제품문의",
"방음문의",
"방종류문의",
"방크기문의",
"방학특화수업문의",
"방학프로그램문의",
"배달 문의",
"배달 문의요청",
"배달가능금액문의",
"배달가능문의",
"배달가능범위문의",
"배달가능여부문의",
"배달과포장가격차이문의",
"배달문의",
"배달배송 문의",
"배달비 문의",
"배달비문의",
"배달서비스가능문의",
"배달시간문의",
"배달시간요청",
"배달앱문의",
"배달용기문의",
"배달용기변경요청",
"배달음식문의",
"배달음식점여부문의",
"배달장소범위문의",
"배달주문",
"배달주문문의",
"배달출발도착알림요청",
"배송 문의",
"배송가능지역 문의",
"배송기간문의",
"배송배달 문의",
"배송포장 문의",
"배차간격",
"백일 떡",
"버너점화요청",
"버스노선",
"번호판 교체",
"번호판 등록",
"베드문의",
"베스트 반찬 문의",
"베스트메뉴문의",
"베스트메뉴문의추천요청",
"베스트상품문의",
"베이커리정보문의",
"베이커리제조문의",
"별도 사물함 문의",
"별도 실내화 여부/착용 문의",
"보강수업문의",
"보관 요청",
"보관기간 문의",
"보관방법 문의",
"보관방법문의",
"보관법 문의",
"보관상문의",
"보관상태 문의",
"보관위치문의",
"보러가는매물개수문의",
"보온기능신발문의",
"보유한학원차량숫자문의",
"보조제품추천요청",
"보증금과월임대료조정이가능한지문의",
"보증금월세시설비권리금문의",
"복구가능여부 문의",
"복용기간 문의",
"복용방법 문의",
"복합결제 문의",
"본사 문의",
"본인 방문 또는 샘플 지참 가능 문의",
"본점 문의",
"본점지점문의",
"봉사료문의",
"봉투 요청",
"봉툿값 문의",
"부가물품 문의",
"부대시설위치문의",
"부대시설이용문의",
"부대시설이용시간문의",
"부대시설이용요금문의",
"부대시설크기문의",
"부동산거래세금문의(매매거래일때)",
"부동산거래세금비율금액",
"부동산관련법정책",
"부동산관련정보문의",
"부동산근처주차문의",
"부분판매 가능유무",
"부위별 고기 문의",
"부위별 고기 주문",
"부작용 문의",
"부진한부분특화수업문의",
"분실물",
"분양권가격(분양권매입시바로지불해야할총금액)",
"분양권관련문의",
"분양권프리미엄",
"분양신청",
"분할결제",
"불법주정차",
"불특정상가건물임대매매문의",
"불특정지역상가건물매매매물문의",
"불편신고",
"뷔페이용문의",
"뷰(경관)문의",
"브랜드 문의",
"브랜드 선호도 문의",
"브랜드별제품비교문의",
"브랜드종류문의",
"브레이크타임문의",
"브릿지 요청",
"비내복약 사용방법 문의",
"비닐봉지 요청",
"비상구 문의",
"비상구 위치 문의",
"비상시탈출방법문의",
"비용",
"비용 책정기준",
"비재고 품목 입고 문의",
"비치품목문의",
"비품 요청",
"비품문의",
"비품추가문의",
"빅사이즈제품문의",
"빈자리 여부 문의",
"빈자리문의",
"빌라매물문의",
"빠른매도의뢰",
"빨대",
"빨대/냅킨등비품요청",
"빨래서비스문의",
"빨리되는메뉴문의",
"빵 나오는 시간 문의",
"빵 분리 요청문의",
"빵 소진 문의",
"빵 외 기타 제품군 문의주문",
"빵 재고 문의",
"빵 종류 문의",
"빵 주문",
"빵 추천 요청",
"빵식재료선택문의",
"빵종류문의",
"뿌리 염색 가능 문의",
"사계절상품문의",
"사무실매물추천문의",
"사무실이용에맞는매물의뢰",
"사무실임차의뢰",
"사용기간 문의",
"사용량 문의",
"사용목적에따른선택문의",
"사용방법 문의",
"사용횟수 문의",
"사은품문의",
"사이드메뉴정보문의",
"사이즈-업문의요청",
"사이즈문의",
"사이즈별진열문의",
"사이즈요구",
"사이즈재고문의",
"사이즈적합여부문의",
"사이즈제시에따른제품선택문의",
"사이즈조절가능문의",
"사이즈측정문의",
"사이즈확인문의",
"사진 제시하면서 동일한 펌 가능한 지 문의",
"사진관",
"삼성 페이 가능 문의",
"삼성페이 가능 문의",
"삼성페이 가능 여부 문의",
"삼성페이카카오페이 문의",
"삼푸 린스 시 별도 추가 가격 문의",
"상가건물매도요청",
"상가건물매수요청",
"상가수익문의",
"상가임차의뢰",
"상가주택문의",
"상권문의",
"상담문의",
"상담선생님확인",
"상담예약문의",
"상비약문의",
"상품 고르는 방법 문의",
"상품 상태 문의",
"상품권 결제 가능 여부 문의",
"상품권 결제 문의",
"상품권 결제 여부",
"상품권 결제문의",
"상품권 문의",
"상품권 사용 가능 문의",
"상품권결제",
"상품권사용문의",
"상품권사용후차액결제문의",
"상한 떡 문의",
"새음식포장요구",
"새제품요청",
"새치 염색 문의",
"색상 문의",
"색상문의",
"색상비교선택문의",
"색상에대한문의",
"샘플 문의요청",
"샘플로 가능한 의류 문의",
"생산지 문의",
"생일떡",
"샴푸 서비스 문의",
"샷토핑추가시추가금액문의",
"서비스 요청",
"서비스문의",
"서비스변경주문문의",
"서빙범위문의",
"서빙요구",
"선결제문의",
"선물 대상 적합성 문의",
"선물 포장 문의",
"선물 포장 요청",
"선물용 문의주문",
"선물용 추천",
"선물용 포장 요청",
"선물용추천요청",
"선물포장 문의요청",
"선물포장문의요청",
"선물포장여부",
"선물할대상에따른제품문의",
"선생님 근무지 문의",
"선생님 변경 가능 문의",
"선생님 요청",
"선생님 추천 요청",
"선생님경력문의",
"선생님나이문의",
"선생님별 스케줄 문의",
"선생님성격문의",
"선생님성별문의",
"선생님인원수문의",
"선생님인적사항관련문의",
"선생님전공문의",
"선생님확인",
"선지급 가능 여부 문의",
"선택제품요청",
"선행학습문의",
"선호하는어학연수나라문의",
"선후불 문의",
"선후불 요청",
"섭취기한 문의",
"섭취방법 문의",
"성별 분리룸 유무 문의",
"성별 제품 문의",
"성별선호에따른선택문의",
"성별제품문의",
"성분",
"성인등록문의",
"성인반문의",
"성인반시간표문의",
"성인석 문의",
"세대수문의",
"세일 품목",
"세일기간문의",
"세일품목",
"세일품목문의",
"세척비용문의",
"세척서비스문의",
"세탁 소요시간 문의",
"세탁 소요일 문의",
"세탁 후 변형 문의",
"세탁가능 여부 문의",
"세탁가능여부 문의",
"세탁가능여부문의",
"세탁물 건조 문의",
"세탁물 보관방법 문의",
"세탁물 세탁요청",
"세탁물 수거 및 배달 가능한 개수 문의",
"세탁물 수거 및 배달가능 문의",
"세탁물 수거 및 배달에 따른 가격 문의",
"세탁물 오염정도 상호확인 요청",
"세탁물 종류에 따른 다른가게 문의",
"세탁물 종류에 따른 보관방법 문의",
"세탁물 종류확인 요청",
"세탁방법 문의",
"세탁시유의점문의",
"세트가격 문의",
"세트메뉴구성문의",
"세트메뉴문의",
"세트메뉴서비스문의",
"세트메뉴종류문의",
"세트메뉴주문",
"세트메뉴중메뉴교환문의요청",
"세트메뉴할인문의",
"세트상품 구성",
"세트제품개별구매문의",
"세트제품문의",
"셀프서비스문의",
"셀프코너리필요청",
"셀프코너문의",
"셔틀버스문의",
"셔틀버스승하차장문의",
"셔틀버스시간문의",
"소독약문의",
"소매판매 문의",
"소방안전문의",
"소비자 특성에 따른 추천 요청",
"소스 판매 문의",
"소스정보문의",
"소요 시간 문의",
"소요비용문의",
"소요시간 문의",
"소요시간문의",
"소유권한",
"소음문제",
"소음문제 제기",
"소재를제시한제품문의",
"소재문의",
"소재의특성문의",
"소재특성문의",
"손상 없는 염색 요청",
"손상 정도에 따른 매직 문의",
"손질 문의",
"쇼핑백 가격 문의",
"쇼핑백 요청",
"쇼핑백문의",
"수강과목문의",
"수강과목별교재문의",
"수강과정에교재교체에대한교재비문의",
"수강기간문의",
"수강대상문의",
"수강등록가능기간문의",
"수강등록가능문의",
"수강료 문의",
"수강생구성문의",
"수강일수변경문의",
"수강자격관련문의",
"수강후다이어트효과",
"수강후효과문의",
"수거, 배달 관련 시간 문의",
"수거일 요청 및 문의",
"수납공간 문의",
"수도요금 정산",
"수령방법",
"수령인",
"수리가되어있는지",
"수선 가능 문의",
"수선 시일 문의",
"수선 완료 연락 문의",
"수선 외 서비스 문의",
"수선 요청",
"수선 후 수선물 보관기간 문의",
"수선가격 문의",
"수선가능문의",
"수선기간문의",
"수선물 수거 및 배달가능문의",
"수선물 종류에 따른 소요시간문의",
"수선물 종류에 따른 수선가격문의",
"수선물 종류에 따른 수선방법 문의",
"수선비용문의",
"수선에 관한 전문가의견 문의",
"수선을 위한 착용 문의",
"수선집 추천 요청",
"수선후결제요청",
"수선후연락문의",
"수선후연락요청",
"수업강의실문의",
"수업과정별수업내용문의",
"수업과정별시간표문의",
"수업교재문의",
"수업교재분량문의",
"수업교재비문의",
"수업방식문의",
"수업별비용문의",
"수업선생님문의",
"수업시간문의",
"수업시간표문의",
"수업요일문의",
"수업일수문의",
"수업적응에관한문의",
"수업진도문의",
"수영장요금문의",
"수입여부에따른요구",
"수입품문의",
"수저요구",
"수저요청",
"수제가방문의",
"수제품문의",
"수제화문의",
"수준별수강가능문의",
"수표 결제 문의",
"수프제공문의",
"수확일 문의",
"숙박문의",
"숙박비용문의",
"숙박유형변경문의",
"숙박인원문의",
"숙소명문의",
"숙제유무문의",
"순한 제품",
"술배달문의/요청",
"스타일링 불만 요청",
"스타일링문의",
"스피커 오류 수정 요청",
"시간추가문의",
"시공사선정유무",
"시럽",
"시럽설탕요구",
"시럽요구",
"시설 문의",
"시설(방구조)문의",
"시설구조문의",
"시설문의",
"시설상태문의",
"시세문의",
"시세변동문의",
"시세비교문의",
"시세조정가능조건거래의뢰",
"시술 서비스 시간 제안 요청",
"시술 시 손상 정도 문의",
"시술 제품 문의",
"시술 중 서비스 가능 문의",
"시술시 손상 정도 문의",
"시술제품 문의",
"시식 가능여부",
"시식문의",
"시식요청",
"시작시기문의",
"시즌별카탈로그요구",
"시착가능문의",
"시험난이도문의",
"식감 문의",
"식감에 대한 문의",
"식기 요청",
"식기교환요청",
"식기류요구",
"식기류쟁반반납",
"식기반납문의",
"식기요구",
"식기위생지적",
"식당 문의",
"식당시간문의",
"식당위치문의",
"식사 관련 문의",
"식사도구요구",
"식사배달요청",
"식사주문",
"식사주문문의",
"식사주문확인",
"식사추가주문",
"식재료공급문의",
"식재료문의",
"식재료재배방식문의",
"신메뉴나온시기",
"신메뉴문의",
"신발과관련된MD구매",
"신분증",
"신상품문의",
"신상품입고문의",
"신선기한 문의",
"신선도 문의",
"신선한 떡 요청",
"신용카드 및 체크카드 결제 가능문의",
"신용카드 및 체크카드 결제 요청",
"신용카드 및 체크카드결제 가능 문의",
"신청장소",
"신호단속문의",
"실내구조문의",
"실내외구분에따른문의",
"실력향상문의",
"싸게파는 물건 문의",
"쌈채소쌈무파채 판매 문의",
"쓰레기처리문의",
"아기용품문의",
"아이들시설문의",
"아파트매물정보",
"아파트명문의",
"악기문의",
"안경착용수강가능문의",
"알러지반응 문의",
"알레르기",
"알레르기유무문의",
"압류 조회/해제",
"앞접시요청",
"애견출입문의",
"애니메이션과정문의",
"애완견출입문의",
"애완동물동반문의",
"앱문의",
"앱주문배달문의",
"앱주문할인문의",
"앱할인문의",
"야외샤워장문의",
"약 미복용에 따른 증상 문의",
"약국 인근 시설 문의",
"약품 처리 요청",
"양념 문의",
"양념/소스별도구매문의",
"양념소스요구",
"양념제품 문의",
"양도세문의",
"양에대한요청",
"양에대한질문",
"어린이메뉴문의",
"어린이사이즈별연령문의",
"어울리는 헤어스타일 추천 요청",
"어플문의",
"어학연수국가문의",
"어학연수국가추천요청",
"어학연수기간문의",
"어학연수문의",
"어학연수비용문의",
"언제볼수있는지문의",
"얼굴 케어 서비스 문의",
"얼룩제거 문의",
"얼음량요구",
"얼음요구",
"얼음요청",
"업무시간",
"업소 관련 문의",
"업종 확인",
"없는메뉴추가주문문의",
"없어진메뉴문의",
"에스프레소샷개수문의",
"에스프레소샷개수요구",
"엘리베이터위치문의",
"엘지멤버십할인문의",
"여권사진",
"여권수수료",
"여분의신발끈포함문의",
"여분키문의",
"여학생부프로그램문의",
"역세권문의",
"연간 이용 등록 요청",
"연락 요청",
"연락처 문의",
"연락처 요청",
"연락처문의",
"연락처제공",
"연락후방문상담가능문의",
"연령 제한",
"연령별 사용 방식 문의",
"연령별 선호 방식 문의",
"연령별 제품 문의",
"연령별운동내용문의",
"연령별적합여부",
"연령에 따른 의약품 요청",
"연령에따른제품추천문의",
"연령에따른제품추천요청",
"연습 시간 요청",
"연식문의",
"연식별매물문의",
"연예인가방문의",
"연예인신발문의",
"연예인펌 문의",
"연체가산금 문의",
"열람실 보기 요청",
"염색 가격 문의",
"염색 문의",
"염색 시 손상 정도 문의",
"염색 요청",
"염색 전 탈색 문의",
"염색 제품 문의",
"염색 종류 문의",
"염색 후 관리 및 유의 사항 문의",
"염색 후 펌 가능 시기 문의",
"염색문의",
"염색제 문의",
"염색제 컬러 문의",
"영수증 발급 요청",
"영수증 버리기",
"영수증 비요청",
"영수증 요청",
"영수증/현금영수증발행요청",
"영수증문의",
"영수증버리기",
"영수증요구",
"영수증요청",
"영수증처리요구",
"영양 가격 문의",
"영양 추가 문의",
"영양제 추가 요청",
"영업 시간 문의",
"영업기간 문의",
"영업기간문의",
"영업문의",
"영업시간",
"영업시간 문의",
"영업시간문의",
"영업일 문의",
"영업일문의",
"옆테이블에서먹는음식이름질문",
"예약 가능 문의",
"예약 문의",
"예약 시간 문의",
"예약 요청",
"예약 주문 문의",
"예약가능인원문의",
"예약메뉴문의",
"예약문의",
"예약문의요청",
"예약배달문의",
"예약시간 문의",
"예약요청",
"예약자리문의",
"예약제품 수령시간",
"예약제품도착시간문의",
"예약제품배송문의",
"예약제품회수문의",
"예약주문",
"예약주문 문의",
"예약주문 시간",
"예약주문 여부",
"예약주문문의",
"예약취소문의",
"예약포장/식사문의",
"예약확인문의",
"오토캠핑장전기사용문의",
"오픈 시간 문의",
"오픈시간 문의",
"오픈시간문의",
"오피스텔매물문의",
"온라인구매금액문의",
"온라인구매문의",
"온라인수업문의",
"옵션문의",
"옵션추가문의",
"옵션추가요구",
"옷감문의",
"옷수선 요청 및 문의",
"와이파이 문의",
"와이파이문의",
"와이파이여부문의",
"완료 여부 문의",
"외국생활관련문의",
"외국인선생님문의",
"외부 음식 반입 여부",
"외부구조문의",
"외상 문의",
"요금 납부 내역",
"요금 납부 방법 문의",
"요금 납부 확인",
"요금 문의",
"요금 할인 문의",
"요일별수업내용문의",
"요일별운동내용문의",
"용도+고기 부위 문의",
"용도+고기 부위 주문",
"용도+고기 손질 주문",
"용도별 고기 문의",
"용도별 고기 주문",
"용도별 썰기 주문",
"용도별 용량 문의",
"용도별가방요청문의",
"용도별적합한땅문의",
"용도에 따른 떡 필요량 문의",
"용도에 따른 식품 문의",
"용도에따른제품추천문의",
"용도에따른제품추천요청",
"용량 문의",
"용량별 고기 문의",
"용량별 고기 주문",
"용품추가문의",
"우유배달서비스가능문의",
"우유변경",
"우천여부에따른문의",
"운동종목별기능성제품문의",
"운영 시간 문의",
"운영과목문의",
"운영기간문의",
"운영시간문의",
"운영프로그램문의",
"운영하는 층에 관한 문의",
"원단문의",
"원두원산지문의",
"원두인스턴트커피로스팅문의",
"원산지 문의",
"원산지문의",
"원산지에따른제품가격문의",
"원어민회화반문의",
"원하는 스타일 나오지 않을때 환불가능 문의",
"원하는 시간에 예약이 가능 문의",
"원하는 펌기술이나 기구가 없을 경우 가능한 미용실 추천 문의",
"원하는자리요구",
"원하는제품재고여부확인",
"월 정액 문의",
"월간 이용 등록 요청",
"월세매물문의",
"월세중개보수문의",
"웨이팅여부문의",
"위생용품 요청",
"위치",
"위치 문의",
"위치문의",
"유기농 문의",
"유기농 여부 문의",
"유명 떡집 문의",
"유사 경우 문의",
"유사제품추천문의",
"유사제품추천요청",
"유의사항 문의",
"유통기한 문의",
"유통기한문의",
"유학국가별수업내용문의",
"유학대비반문의",
"유행 커트 문의",
"유행 펌 문의",
"융자있는집인지",
"음료 문의주문",
"음료 서비스 문의",
"음료 요청",
"음료/술문의",
"음료문의",
"음료반입",
"음료술문의",
"음료술주문",
"음료온도문의",
"음료온도요구",
"음료주문",
"음료추가가격문의",
"음료추가문의",
"음식 가격 문의",
"음식 반입 여부",
"음식 조리 방법 요청",
"음식 조리 시간 문의",
"음식 주문",
"음식 주문 문의",
"음식 주문 방법 문의",
"음식 주문 변경 요청",
"음식 주문 요청",
"음식맛에대한컴플레인",
"음식반찬소스이름문의",
"음식성분문의",
"음식에이물질이있는경우",
"음식온도에대한컴플레인",
"음식이늦게나올때",
"음식이늦게배달올때",
"음식이주문과다른경우",
"음악장르문의",
"의류 외 세탁문의",
"의류 외 수선문의",
"의상과코디할제품문의",
"의상에따른선택문의",
"의상에따른코디문의",
"의약품 구매 요청",
"의약품 요청",
"의약품 추천 요정",
"의약품 추천 요청",
"의자요청",
"이론반시간표문의",
"이물질발견등음식위생지적",
"이바지 떡",
"이벤트 기간 문의",
"이벤트 문의",
"이벤트 상품 문의",
"이벤트기간문의",
"이벤트기간에대한컴플레인",
"이벤트메뉴문의",
"이벤트메뉴컴플레인",
"이벤트문의",
"이벤트서비스메뉴요청",
"이벤트서비스문의",
"이벤트할인",
"이사전에도배장판수리해주는지문의요청",
"이쑤시개문의/요청",
"이용 가격 문의",
"이용 금액 문의",
"이용 방법 문의",
"이용 시간 문의",
"이용 시간 연장",
"이용 요청",
"이용시간 문의",
"이용시간문의",
"이용요금문의",
"이월제품문의",
"이월제품안내문의",
"이전등록",
"인근 병원/약국 관련 문의",
"인근어학원문의",
"인근지리와위치문의",
"인근지리와위치요청",
"인근피아노학원에관한문의",
"인기 반찬 문의",
"인기 상품 문의",
"인기메뉴문의",
"인기부위 문의",
"인기부위 문의주문",
"인기상품 문의",
"인기색상문의",
"인기제품 문의",
"인기제품문의",
"인쇄 가격 문의",
"인원배정 문의",
"인원수에맞는자리요구",
"인원에따른요금문의",
"인터넷가능여부문의",
"인터넷매물정보문의",
"인터넷판매문의",
"일반 제품 문의",
"일반주문",
"일반형(제품명) 주문",
"일부메뉴포장문의",
"일인실, 다인실 분리 문의",
"일일체험 문의",
"일회용컵요구",
"일회용포크나이프요구",
"임대",
"임대가문의",
"임대가조정문의",
"임대기간",
"임대인직접계약문의",
"임차인의희망임대가매물문의",
"임차임대이유문의",
"입고 문의",
"입고시기 문의",
"입고시연락요청",
"입고일 문의",
"입시대비성적문의",
"입시대비프로그램문의",
"입시반대학별과정문의",
"입시반시간표문의",
"입시준비시기문의",
"입실 가능 문의",
"입실 시간 문의",
"입실문의",
"입실사무 담당자 문의",
"입을수있는기간문의",
"입주가능일문의",
"입주시기",
"입지정보문의",
"자동이체 납부 문의",
"자동이체 변경",
"자동이체 신청",
"자동이체 해지",
"자동이체 확인",
"자동차전용도로",
"자리 교체 문의",
"자리 문의",
"자리 변경 문의",
"자리 이동 문의",
"자리 청소 요청",
"자리문의",
"자리묻기",
"자리배치 문의",
"자리여부문의",
"자리이동요청",
"자습문의",
"자체수선가능시간문의",
"자체시험결과에따른재시험문의",
"자체시험문의",
"자체제작여부문의",
"잔금",
"잔금일문의",
"잔돈 요청",
"장갑 요청",
"장기투숙문의",
"장기투숙비용문의",
"장소",
"장소에따른선택문의",
"장소에따른의상문의",
"장작사용여부문의",
"재개발보상",
"재건축계획문의",
"재건축사업추진단계",
"재건축시기",
"재건축아파트정보(세대수층연식위치등)",
"재결제요청",
"재고 문의",
"재고 확인",
"재고문의",
"재고문의후가져오는시간문의",
"재고없을시예약문의",
"재고없을시입고문의",
"재료 구입 문의",
"재료 문의",
"재료 생산년도 문의",
"재료 용량에 따른 떡 수량 문의",
"재료 원산지 문의",
"재료 재고 문의",
"재료 준비상태 문의",
"재료 함량 문의",
"재료문의",
"재료비문의",
"재료선택문의",
"재료소진시 운영 문의",
"재료손질 문의",
"재료의신선도문의",
"재방문",
"재방문 문의",
"재배방식 문의",
"재입고문의",
"재입고수량문의",
"재통화요청",
"저당권설정여부문의",
"저렴한 제품 추천 요청",
"저렴한제품문의",
"적립 문의",
"적립 요청",
"적립 포인트 사용 문의",
"적립금 사용금액문의",
"적립금문의",
"적립금사용요청",
"적립금액/적립률문의",
"적립률 문의",
"적립방법문의",
"적립번호제시",
"적립요청",
"적립카드 문의",
"적립카드 발급 요청",
"적립카드 사용",
"적립쿠폰사용",
"적립쿠폰사용방법문의",
"적립쿠폰지급조건문의",
"적립포인트문의",
"적합 여부 문의",
"전공별수업내용문의",
"전공후진로문의",
"전단지 요청",
"전망문의",
"전망좋은방예약문의",
"전문가 의견 문의",
"전반적인수업시간문의",
"전반적인수업커리큘럼문의",
"전세매물문의",
"전세중개보수문의",
"전업종문의",
"전화 주문 문의",
"전화번호",
"전화번호 문의",
"전화번호/팩스",
"전화번호문의",
"전화주문문의",
"전화포장주문문의",
"점심 시간 문의",
"점심시간",
"젓가락종류문의",
"정산 문의",
"정산여부문의",
"정수기 위치 문의",
"정수기/차 식음 문의",
"정수기/차 식음문의",
"정액권 문의",
"정액권 요청",
"정액권 해지 문의",
"정품보증서문의",
"제공되지않은반찬요청",
"제로페이",
"제로페이 가능 문의",
"제사음식 주문 문의",
"제조 문의",
"제조가능일 문의",
"제조방법",
"제조방식 문의",
"제조사문의",
"제조시간 문의",
"제조시간문의",
"제조일 문의",
"제조지 문의",
"제철 과일나물 문의",
"제철 떡",
"제품 기능 문의",
"제품 문의",
"제품 보관 방법 문의",
"제품 보관법 문의",
"제품 비교",
"제품 사이즈 문의",
"제품 성분 문의",
"제품 손질법 문의",
"제품 수량 문의",
"제품 위치 문의",
"제품 이름 문의",
"제품 이상시 처리 문의",
"제품 자체 생산 확인",
"제품 제조일 문의",
"제품 제조회사 문의",
"제품 제조횟수 문의",
"제품 조리법 문의",
"제품 종류 문의",
"제품 형태에 따른 의약품 요청",
"제품 효과 문의",
"제품가격대문의",
"제품가격문의",
"제품가격불만족",
"제품결함문의",
"제품구매시할인문의",
"제품구성문의",
"제품군의 다른 품목 문의",
"제품군의 다른품목 문의",
"제품길이문의",
"제품디자인문의",
"제품디자인확인",
"제품명문의",
"제품문의",
"제품방문수령",
"제품별 문의",
"제품별모양종류문의",
"제품별색깔종류문의",
"제품별추천문의",
"제품보관방법문의",
"제품부품별도구매가능문의",
"제품비교",
"제품비교문의",
"제품비교선택문의",
"제품사용기간문의",
"제품사용방법문의",
"제품사이즈문의",
"제품상태 확인 요청",
"제품색상 추가 요청",
"제품소재문의",
"제품요청",
"제품용도 문의",
"제품용도문의",
"제품위치문의",
"제품의 사용용도",
"제품의 양 문의",
"제품의세척방법문의",
"제품의세탁방법문의",
"제품입고문의",
"제품재고문의",
"제품재테크가치문의",
"제품정보문의",
"제품제작한나라문의",
"제품종류 문의",
"제품종류 추가 요청",
"제품종류문의",
"제품주문",
"제품주문문의",
"제품주문취소",
"제품차이문의",
"제품착용문의",
"제품추천 요청",
"제품추천문의",
"제품추천요청",
"제품특징문의",
"제품하자발견시문의",
"제품하자발견시요구",
"제품하자에대한컴플레인",
"제품확인요청",
"제형 문의",
"제휴카드추가할인",
"제휴할인 요청",
"제휴할인문의",
"조리 방법 문의",
"조리기한 문의",
"조리방법문의",
"조리상태문의",
"조리상태에대한질문",
"조리상태요청",
"조리요구",
"조미료 사용 문의",
"조식메뉴문의",
"조식문의",
"조식제공문의",
"조식제공시간문의",
"조제시간 문의",
"조합 제품 추천",
"조회",
"종류별 떡 주문",
"종류별 화장품 주문 요청",
"종류별가방제품문의요청",
"종류별신발제품문의요청",
"종류별액세서리제품문의요청",
"종류별의류제품문의요청",
"종류별제품제시요청",
"종이백요구",
"종이컵문의",
"주거형태",
"주고객문의",
"주말상담문의",
"주말영업문의",
"주말프로그램문의",
"주문 떡 찾는 시간 문의",
"주문가능여부",
"주문가능여부문의",
"주문과다른메뉴가전달된상황",
"주문내용변경",
"주문메뉴대기시간문의",
"주문메뉴전달장소요청",
"주문메뉴확인",
"주문방법문의",
"주문변경",
"주문변경 문의",
"주문수량",
"주문시간 문의",
"주문양에대한할인문의",
"주문일 문의",
"주문자지정서빙요청",
"주문정정",
"주문제작문의",
"주문지요청",
"주문취소",
"주변 시설 문의",
"주변개발계획문의",
"주변관광지문의",
"주변상가업종문의",
"주변상권문의",
"주변시설",
"주변시설문의",
"주변식당문의",
"주변에상가학원병원은많은지",
"주변지역 문의",
"주변지역문의",
"주변환경문의",
"주소지로배달요청",
"주소지변경",
"주의사항 문의",
"주차 공간 문의",
"주차 문의",
"주차 요금 문의",
"주차공간 문의",
"주차공간문의",
"주차권 문의",
"주차권 요청",
"주차권요구",
"주차권주차도장요구",
"주차단속문의",
"주차단속여부문의",
"주차도장 요구",
"주차문의",
"주차비문의",
"주차여부문의",
"주차요금 문의",
"주차장 문의",
"주차장 위치 문의",
"주차장문의",
"주차장유무문의",
"주차증 발행 요청",
"주택매물문의",
"준비품목문의",
"줄넘기수업문의",
"줄소재변경문의",
"중개보수문의",
"중개보수세금계산서발행요청",
"중개보수조정문의",
"중개보수할인문의",
"중개보수현금카드법인카드결제문의",
"중개수수료카드결제문의",
"중개수수료현금영수증문의",
"중도금납입횟수,대출유무",
"중복할인문의",
"증상 문의",
"증상별 제품 용법 문의",
"지목(땅의용도)문의",
"지방 비율",
"지인소개할인문의",
"지점유무문의",
"지정석 문의",
"직원 문의",
"직원에게안내요청",
"직원요청",
"직원착용문의",
"진도문의",
"진동벨문의",
"진로문의",
"진로전공문의",
"진로전문반등록시기문의",
"진열되어있지않은상품문의",
"진열상품문의",
"진열위치 문의",
"진열제품문의",
"짐보관문의",
"차량검사",
"차량번호변경",
"차량별노선문의",
"차량지도선생님탑승유무문의",
"차열쇠보관문의",
"착용방법문의",
"착용성별문의",
"착용스타일문의",
"착용여부문의",
"착용여부에따른교환요구",
"착용제품정리요청",
"착용후다른사이즈문의",
"착용후다른사이즈피팅문의",
"착용후다른제품문의",
"착용후맞지않는제품착용방법문의",
"착용후어울리는지문의",
"착화감",
"참고 사진 제시하며 스타일 요청",
"창업의도문의",
"찾는 떡 문의",
"찾는과일 문의",
"찾는채소 문의",
"채광문의",
"채광조절요청",
"책, 잡지, 신문 서비스 가능 문의",
"처리기간",
"처방전 관련 문의",
"처방전 약 구입 요청",
"처방전 약에 대한 문의",
"처방전 외 약품 구입 문의",
"처방전 외 약품 구입 요청",
"청소 요청",
"청소년 이용 시간 문의",
"청소문의",
"체납고지서 문의",
"체인점 문의",
"체인점문의",
"체크아웃시간문의",
"체크인문의",
"체크인시간문의",
"체험학습선생님동반문의",
"체형결점보완요청",
"체형에따른선택문의",
"초크 요청",
"최근 이용자 문의",
"추가 금액 문의",
"추가 떡 결제 요청",
"추가 제조 문의",
"추가결제요청",
"추가매물문의",
"추가매물요청",
"추가반찬요청",
"추가비용 문의",
"추가요금문의",
"추가요청",
"추가인원(베드)문의",
"추가인원(베드)사용료문의",
"추가주문",
"추천 커트 문의",
"추천 펌 문의",
"추천상품 문의",
"추후방문",
"추후연락",
"추후연락요청",
"취급 제품 문의",
"취미반문의",
"취사도구문의",
"취사문의",
"취사재료문의",
"취향에따른메뉴추천요구",
"층문의",
"치수문의",
"치안상태",
"친환경제품문의",
"침구류교체요청",
"침구상태",
"침구추가문의",
"침대수량스타일문의",
"카드 결제 가능 문의",
"카드 결제 문의",
"카드 결제 요청",
"카드 납부",
"카드 할인 문의",
"카드결제",
"카드결제 문의",
"카드결제(카드오류상황)",
"카드결제가능문의",
"카드결제가문의",
"카드결제부가세여부문의",
"카드사 할인 문의",
"카드할부문의",
"카카오페이 가능 문의",
"카카오페이 가능 여부 문의",
"카카오페이/삼성페이",
"카카오페이/삼성페이결제",
"카카오페이삼성페이 문의",
"카카오페이삼성페이결제",
"카탈로그 문의",
"캐리어요구",
"캬드 결제 가능 문의",
"커트 가격 문의",
"커트 가격문의",
"커트 가능 문의",
"커트 문의",
"커트 요청",
"커트 추가 요청",
"커트요청",
"커팅요구",
"커플석 문의",
"커플제품문의",
"컬러 문의",
"컴퓨터 구비 유무 문의",
"컴퓨터 부팅 오류 수정 요청",
"컴퓨터 성능 문의",
"컴플레인",
"컵 요청",
"컵사용문의",
"컵요구",
"케어 후 다음 케어 필요 시기 문의",
"케이스 디자인 문의",
"케이스 재질 문의",
"케이크 디자인 문의",
"케이크 부품 문의",
"케이크 종류 문의",
"케이크 주문일 문의",
"케이크 초 요청",
"케이크 추천 요청",
"케첩소스요구",
"코디상품문의",
"코디액세서리문의",
"코디제품추천문의",
"코스메뉴문의",
"코스메뉴요청",
"콘센트자리문의",
"콩쿠르대비기간문의",
"콩쿠르시기문의",
"쿠폰 문의",
"쿠폰 사용 문의",
"쿠폰 사용 방법 문의",
"쿠폰 사용방법",
"쿠폰 유무 문의",
"쿠폰결제 문의",
"쿠폰결제요청",
"쿠폰멤버십적립문의",
"쿠폰문의",
"쿠폰발급 가능문의",
"쿠폰발급 요청",
"쿠폰발행 문의",
"쿠폰사용",
"쿠폰사용방법문의",
"쿠폰시적립요청",
"쿠폰유효기간문의",
"쿠폰적립문의",
"쿠폰할인문의",
"퀵문의",
"큐대 요청",
"크기 문의",
"클로징시간문의",
"클로징주문시간문의",
"클리닉 가격 문의",
"클리닉 가격문의",
"클리닉 문의",
"클리닉 요청",
"클리닉 효과 문의",
"키보드 오류 수정 요청",
"키오스크사용방법문의",
"킬로당 가격문의",
"탈모 케어 서비스 문의",
"탈모관련 추천 서비스 문의",
"탈모관련 추천 케어 서비스 문의",
"탈색 가격 문의",
"탈색 문의",
"탈색 요청",
"탈색 횟수 문의",
"태권도대학문의",
"태권도학과대학문의",
"택배로주문가능여부",
"택배비 문의",
"택배요청",
"텀블러사용",
"텀블러할인문의",
"테스터 제품",
"테스트프로그램가능여부문의",
"테이블 이용 문의",
"테이블청소요청",
"테이크아웃문의",
"테이크아웃요구",
"테이크아웃용기요청",
"테이크아웃용기추가금액문의",
"테이크아웃할인문의",
"토지관련법정책",
"토지대장토지등기부등본열람요청",
"토지정보요청",
"토핑문의",
"통기성여부에대한문의",
"통신사할인 문의요청",
"퇴실 시간 문의",
"퇴실시간문의",
"투숙객신분확인",
"투숙객혜택",
"투자문의",
"트레이 위치 문의",
"특별프로그램문의",
"특별프로그램시간문의",
"특별프로그램시교육비문의",
"특수목적에맞는제품문의",
"특수부위 문의",
"특수부위 문의주문",
"특이한신발문의",
"특정 게임 자리 문의",
"특정 기기 문의",
"특정 맛/향기 제품 요청",
"특정 브랜드제품",
"특정 성분 약 복용 문의",
"특정 성분 의약품 요청",
"특정 시간 후 재방문 통보",
"특정 제품 문의",
"특정 제품 요청",
"특정 증상 의약품 요청",
"특정 프로그램 설치 문의",
"특정 환자들이 찾는 의약품 요청",
"특정날짜예약문의",
"특정메뉴문의",
"특정메뉴우선요청",
"특정메뉴종류문의",
"특정브랜드스타일추천요청",
"특정사이즈타입문의",
"특정상품문의",
"특정원두요구",
"특정재료 제외 요청",
"특정재료 포함 요청",
"특정재료가포함되는지문의",
"특정재료제외요구",
"특정재료첨삭요구",
"특정재료첨삭요청",
"특정재료추가요구",
"특정제품 문의",
"티머니사용문의",
"티비 다시보기 문의",
"티켓제로 사용 가능 문의",
"파트너 문의",
"파티룸문의",
"판매단위 문의",
"판매단위 입수량 문의",
"판매단위별 가격 문의",
"판매문의",
"판매방식 문의",
"판매상품 문의",
"판매수량 문의",
"판매여부문의",
"판매제품 문의",
"판촉행사종류",
"팜플렛명함요청",
"패키지 가격 문의",
"패키지 문의",
"펌 가격 문의",
"펌 가격문의",
"펌 문의",
"펌 요청",
"펌 후 추후 염색원할 경우 언제가 적당한지 시기 문의",
"펌스타일링 후 관리방법 및 유의사항 문의",
"펌스타일링 후 헤어스타일 유지기간 문의",
"페이 결제 요청",
"편의시설문의",
"편의시설이용시간문의",
"편의용품구비여부문의",
"폄 문의",
"평일 오픈 시간",
"포인트 결제",
"포인트 사용 문의",
"포인트 적립 카드 문의",
"포인트적립 가능 문의",
"포인트적립 가능문의",
"포인트적립문의",
"포인트충전",
"포장 문의",
"포장 상품 구매 확인",
"포장 요청",
"포장 용기 문의",
"포장가격문의",
"포장가능금액문의",
"포장대기시간문의",
"포장된 떡 문의",
"포장메뉴문의",
"포장문의",
"포장박스 용량 문의",
"포장방법 문의",
"포장방법 요청",
"포장백 용량 문의",
"포장변경문의",
"포장비문의",
"포장시할인문의",
"포장용기문의",
"포장용기선택",
"포장유형",
"포장주문",
"포장주문문의",
"포장할인문의",
"포켓볼 규칙 문의",
"포켓볼 테이블 문의",
"품목별 보관방법문의",
"품목별 브랜드 문의",
"품목별 세일기간 문의",
"품새별시간표문의",
"프랜차이즈 유무 질문",
"프로그램 설치 문의",
"프로그램문의",
"프로그램반응문의",
"프로그램추천요청",
"프린트 이용 문의",
"피부 타입별 제품",
"피아노과정배우는기간",
"피아노미달문의",
"피아노수문의",
"피아노전공진로문의",
"피팅룸사용규칙문의",
"피팅룸위치문의",
"피팅문의",
"피팅사이즈주문",
"픽업문의",
"학교는가까운지",
"학교별피아노외전공문의",
"학교선호도문의",
"학교실내화문의",
"학교진로문의",
"학군은어떤지",
"학년별반수문의",
"학년별시간표문의",
"학년별운영과목문의",
"학생 흡연 시 대처 방안 문의",
"학생, 일반실 분리룸 유무 문의",
"학생구성문의",
"학생들과의트러블있을경우별도관리유무문의",
"학생수준에맞는수강문의",
"학생증 지참 문의",
"학원교실수문의",
"학원교육대상문의",
"학원교재문의",
"학원대학진학문의",
"학원레벨테스트문의",
"학원방학,휴일문의",
"학원방학휴일문의",
"학원분위기문의",
"학원브랜드문의",
"학원운영기간문의",
"학원위치문의",
"학원전화번호문의",
"학원차량가능문의",
"학원특색문의",
"학원확인",
"한정판구매가능수량문의",
"한정판문의",
"할부 문의",
"할부결제문의",
"할부결제요청",
"할부문의요청",
"할인 가능한 카드 문의",
"할인 문의",
"할인 및 이벤트 문의",
"할인 여부 문의",
"할인 유무 문의",
"할인 혹은 적립 카드 여부 문의",
"할인가격문의",
"할인가판매 시간",
"할인률 문의",
"할인메뉴문의",
"할인문의",
"할인방법문의",
"할인여부문의",
"할인요청",
"할인율 문의",
"할인율문의",
"할인이벤트문의",
"할인이유문의",
"할인정보문의",
"할인제품 문의",
"할인카드 문의",
"할인카드문의",
"할인카드적용요청",
"할인코너문의",
"할인쿠폰/적립카드문의",
"할인쿠폰사용",
"할인쿠폰사용문의",
"할인품목 문의",
"할인행사기간",
"할인행사기간문의",
"할인행사여부",
"합기도와태권도차이점문의",
"합의문의",
"핸드폰번호이용적립",
"핸드폰충전요청",
"행사공간문의",
"행사문의",
"행사이름문의",
"행사제품 문의",
"향 문의",
"향수제품",
"허가된층수문의",
"허위매물문의",
"헤드폰 교체 요청",
"헤어 관련된 기구 구입 가능여부 문의",
"헤어 길이별 스타일 추천 요청",
"헤어 스타일 문의",
"헤어 제품 문의",
"헤어스타일 문의",
"헤어스타일 요청",
"헤어스타일 지속기간 관련 문의",
"현금 결제 가능 문의",
"현금 결제 문의",
"현금 결제 요청",
"현금 고객 할인 여부 문의",
"현금 할인 여부 문의",
"현금가할인요청",
"현금결제",
"현금결제 문의",
"현금결제할인문의",
"현금영수증",
"현금영수증 발급 문의",
"현금영수증 발급 요청",
"현금영수증 발급가능문의",
"현금영수증 발급문의",
"현금영수증 발급문의요청",
"현금영수증 요청",
"현금영수증문의",
"현금영수증발급가능문의",
"현금영수증발행요청",
"현금영수증요청",
"현금인출기 문의",
"현금할인가문의",
"현금할인문의",
"현금할인여부문의",
"현업종문의",
"현장 복용 문의",
"현재 본인 스타일의 염색 색상 추천 요청",
"현재 본인 헤어색상에서 가능한 염색 색상 문의",
"협회단체문의",
"협회단체소속문의",
"형제할인가능유무문의",
"호텔內편의시설문의",
"호텔서비스요청",
"호텔정보문의",
"홀/배달가격차이문의",
"홀/포장가격차이문의",
"홀가격과배달가격차이문의",
"홀배달가격차이문의",
"홀식사문의",
"홀포장가격차이문의",
"홈 케어 방법 문의",
"홈페이지 문의",
"홈페이지관련문의",
"화장시피팅문의",
"화장실 문의",
"화장실 위치 문의",
"화장실 이용 문의",
"화장실관련요청사항",
"화장실문의",
"화장실비밀번호문의",
"화장실위치문의",
"화장실이용문의",
"화장실휴지비치문의",
"환불 문의",
"환불규정문의",
"환불문의",
"환불반품교환문의",
"환불반품교환요청",
"환불이나 연장 가능 여부 문의",
"회수 문의",
"회원 가입 가능 문의",
"회원 정보 분실",
"회원 혜택",
"회원가입 문의",
"회원가입 요청",
"회원등록요청",
"회원제 문의",
"회원카드 통합 문의",
"회원할인문의",
"회원혜택 문의",
"효과 문의",
"효과 비교",
"효과별 제품 추천",
"후불카드 사용 문의",
"후식문의",
"후식요구",
"훼손 문의",
"휴게실 유무 문의",
"휴게실 이용 관련 문의",
"휴게실유무문의",
"휴대폰반입문의",
"휴무일 문의",
"휴일 문의",
"휴일문의",
"휴지요구",
"휴지통위치문의",
"흡연 가능 문의",
"흡연공간문의",
"흡연석 문의",
"흡연실 문의",
"흡연실 위치 문의",
"희망매매가매물문의",
"희망상품 문의",
"희망이사날짜"
] | Entry not found | 15 |
tcaputi/guns-relevant | null | Entry not found | 15 |
tennessejoyce/titlewave-bert-base-uncased | [
"Unanswered",
"Answered"
] | ---
language: en
license: cc-by-4.0
widget:
- text: "[Gmail API] How can I extract plain text from an email sent to me?"
---
# Titlewave: bert-base-uncased
## Model description
Titlewave is a Chrome extension that helps you choose better titles for your Stack Overflow questions. See the [github repository](https://github.com/tennessejoyce/TitleWave) for more information.
This is one of two NLP models used in the Titlewave project, and its purpose is to classify whether a question will be answered or not based just on the title. The [companion model](https://huggingface.co/tennessejoyce/titlewave-t5-small) suggests a new title based on the body of the question.
## Intended use
Try out different titles for your Stack Overflow post, and see which one gives you the best chance of receiving an answer.
You can use the model through the API on this page (hosted by HuggingFace) or install the Chrome extension by following the instructions on the [github repository](https://github.com/tennessejoyce/TitleWave), which integrates the tool directly into the Stack Overflow website.
You can also run the model locally in Python like this (which automatically downloads the model to your machine):
```python
>>> from transformers import pipeline
>>> classifier = pipeline('sentiment-analysis', model='tennessejoyce/titlewave-bert-base-uncased')
>>> classifier('[Gmail API] How can I extract plain text from an email sent to me?')
[{'label': 'Answered', 'score': 0.8053370714187622}]
```
The 'score' in the output represents the probability of getting an answer with this title: 80.5%.
## Training data
The weights were initialized from the [BERT base model](https://huggingface.co/bert-base-uncased), which was trained on BookCorpus and English Wikipedia.
Then the model was fine-tuned on the dataset of previous Stack Overflow post titles, which is publicly available [here](https://archive.org/details/stackexchange).
Specifically I used three years of posts from 2017-2019, filtered out posts which were closed (e.g., duplicates, off-topic), and selected 5% of the remaining posts at random to use in the training set, and the same amount for validation and test sets (278,155 posts each).
## Training procedure
The model was fine-tuned for two epochs with a batch size of 32 (17,384 steps total) using 16-bit mixed precision.
After some hyperparameter tuning, I found that the following two-phase training procedure yields the best performance (ROC-AUC score) on the validation set:
* In the first epoch, all layers were frozen except for the last two (pooling layer and classification layer) and a learning rate of 3e-4 was used.
* In the second epoch all layers were unfrozen, and the learning rate was decreased by a factor of 10 to 3e-5.
Otherwise, all parameters were set to the defaults listed [here](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments),
including the AdamW optimizer and a linearly decreasing learning schedule (both of which were reset between the two epochs). See the [github repository](https://github.com/tennessejoyce/TitleWave) for the scripts that were used to train the model.
## Evaluation
See [this notebook](https://github.com/tennessejoyce/TitleWave/blob/master/model_training/test_classifier.ipynb) for the performance of the title classification model on the test set.
| 3,369 |
TehranNLP-org/bert-large-sst2 | [
"negative",
"positive"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: SST2
type: ''
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.5091743119266054
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- Accuracy: 0.5092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: not_parallel
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2104 | 0.7985 | 0.5092 |
| 0.481 | 2.0 | 4208 | 0.7191 | 0.5092 |
| 0.7017 | 3.0 | 6312 | 0.6996 | 0.5092 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
| 1,796 |
cradle-bio/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert | [
"LABEL_0"
] | ---
license: apache-2.0
tags:
- protein language model
- generated_from_trainer
datasets:
- train
metrics:
- spearmanr
model-index:
- name: tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: cradle-bio/tape-fluorescence
type: train
metrics:
- name: Spearmanr
type: spearmanr
value: 0.5505486770316164
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
This model is a fine-tuned version of [thundaa/tape-fluorescence-evotuning-DistilProtBert](https://huggingface.co/thundaa/tape-fluorescence-evotuning-DistilProtBert) on the cradle-bio/tape-fluorescence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3377
- Spearmanr: 0.5505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 2560
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 6.2764 | 0.93 | 7 | 1.9927 | -0.0786 |
| 1.1206 | 1.93 | 14 | 0.8223 | -0.1543 |
| 0.8054 | 2.93 | 21 | 0.6894 | 0.2050 |
| 0.7692 | 3.93 | 28 | 0.8084 | 0.2807 |
| 0.7597 | 4.93 | 35 | 0.6613 | 0.4003 |
| 0.7416 | 5.93 | 42 | 0.6803 | 0.3829 |
| 0.7256 | 6.93 | 49 | 0.6428 | 0.4416 |
| 0.6966 | 7.93 | 56 | 0.6086 | 0.4506 |
| 0.7603 | 8.93 | 63 | 0.9119 | 0.4697 |
| 0.9187 | 9.93 | 70 | 0.6048 | 0.4757 |
| 1.0371 | 10.93 | 77 | 2.0742 | 0.4076 |
| 1.0947 | 11.93 | 84 | 0.6633 | 0.4522 |
| 0.6946 | 12.93 | 91 | 0.6008 | 0.4123 |
| 0.6618 | 13.93 | 98 | 0.5931 | 0.4457 |
| 0.8635 | 14.93 | 105 | 1.9561 | 0.4331 |
| 0.9444 | 15.93 | 112 | 0.5627 | 0.5041 |
| 0.5535 | 16.93 | 119 | 0.4348 | 0.4840 |
| 0.9059 | 17.93 | 126 | 0.6704 | 0.5123 |
| 0.5693 | 18.93 | 133 | 0.4616 | 0.5285 |
| 0.6298 | 19.93 | 140 | 0.6915 | 0.5166 |
| 0.955 | 20.93 | 147 | 0.6679 | 0.5677 |
| 0.7866 | 21.93 | 154 | 0.8136 | 0.5559 |
| 0.6687 | 22.93 | 161 | 0.4782 | 0.5561 |
| 0.5336 | 23.93 | 168 | 0.4447 | 0.5499 |
| 0.4673 | 24.93 | 175 | 0.4258 | 0.5428 |
| 0.478 | 25.93 | 182 | 0.3651 | 0.5329 |
| 0.4023 | 26.93 | 189 | 0.3688 | 0.5428 |
| 0.3961 | 27.93 | 196 | 0.3692 | 0.5509 |
| 0.3808 | 28.93 | 203 | 0.3434 | 0.5514 |
| 0.3433 | 29.93 | 210 | 0.3377 | 0.5505 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 3,730 |
abspython/distilbert-finetuned | [
"NEGATIVE",
"POSITIVE"
] | ---
language: en
license: other
---
DistilBERT fine-tuned
This model is a fine-tuned checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased).
| 170 |
facebook/roberta-hate-speech-dynabench-r1-target | null | ---
language: en
---
# LFTW R1 Target
The R1 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! | 570 |
hassan4830/distil-bert-uncased-finetuned-english | null | ---
license: afl-3.0
---
DistilBERT Binary Text Classifier
This DistilBERT-based text classification model, trained on the IMDB dataset, performs binary sentiment classification on any given sentence.
The model has been fine-tuned for better results in manageable time frames.
LABEL0 - Negative
LABEL1 - Positive | 309 |
erickdp/gs3n-roberta-model | [
"0",
"1",
"2"
] | ---
tags: xerox
language: es
widget:
- text: "Debo de levantarme temprano para hacer ejercicio"
datasets:
- erixxdp/autotrain-data-gsemodel
co2_eq_emissions: 0.027846282970913613
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1148842296
- CO2 Emissions (in grams): 0.027846282970913613
## Validation Metrics
- Loss: 0.4816772937774658
- Accuracy: 0.864
- Macro F1: 0.865050349743783
- Micro F1: 0.864
- Weighted F1: 0.865050349743783
- Macro Precision: 0.8706266090178479
- Micro Precision: 0.864
- Weighted Precision: 0.8706266090178482
- Macro Recall: 0.864
- Micro Recall: 0.864
- Weighted Recall: 0.864
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/erixxdp/autotrain-gsemodel-1148842296
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("erixxdp/autotrain-gsemodel-1148842296", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("erixxdp/autotrain-gsemodel-1148842296", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,338 |
adamnik/electra-entailment-detection | null | ---
license: mit
---
| 21 |
LilaBoualili/bert-pre-pair | null | Entry not found | 15 |
Sakil/imdbsentdistilbertmodel | null | ---
language:
- en
tags:
- text Classification
license: apache-2.0
widget:
- text: "I like you. </s></s> I love you."
---
* IMDBSentimentDistilBertModel:
  - I used the IMDB movie review dataset to create a custom model using DistilBertForSequenceClassification.

```python
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments

model = DistilBertForSequenceClassification.from_pretrained('./imdbsentdistilbertmodel')
```
| 448 |
addy88/programming-lang-identifier | [
"go",
"java",
"javascript",
"php",
"python",
"ruby"
] | This model is a fine-tuned version of CodeBERT (a RoBERTa-based model), trained on CodeSearchNet.
### Quick start

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("addy88/programming-lang-identifier")
model = AutoModelForSequenceClassification.from_pretrained("addy88/programming-lang-identifier")

# CODE_TO_IDENTIFY is a placeholder for the source-code string to classify.
inputs = tokenizer(CODE_TO_IDENTIFY, return_tensors="pt")
logits = model(**inputs).logits
language_idx = logits.argmax()  # index of the predicted language label
``` | 489 |
bergum/xtremedistil-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: xtremedistil-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
---
# xtremedistil-emotion
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9265
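For context, the six classes of the emotion dataset and a small ranking helper — an illustrative sketch only (`EMOTIONS` and `top_predictions` are hypothetical names, not part of the released checkpoint):

```python
# Illustrative helper around the six classes of the emotion dataset,
# in the dataset's label order.
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def top_predictions(scores, k=3):
    """Return the k most likely emotions given per-class probabilities."""
    ranked = sorted(zip(EMOTIONS, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

print(top_predictions([0.01, 0.90, 0.03, 0.02, 0.02, 0.02], k=2))
# → [('joy', 0.9), ('love', 0.03)]
```

Here `scores` stands for softmax probabilities in the dataset's label order, e.g. computed from this model's logits via the `transformers` library.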
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- num_epochs: 24
### Training results
<pre>
Epoch Training Loss Validation Loss Accuracy
1 No log 1.238589 0.609000
2 No log 0.934423 0.714000
3 No log 0.768701 0.742000
4 1.074800 0.638208 0.805500
5 1.074800 0.551363 0.851500
6 1.074800 0.476291 0.875500
7 1.074800 0.427313 0.883500
8 0.531500 0.392633 0.886000
9 0.531500 0.357979 0.892000
10 0.531500 0.330304 0.899500
11 0.531500 0.304529 0.907000
12 0.337200 0.287447 0.918000
13 0.337200 0.277067 0.921000
14 0.337200 0.259483 0.921000
15 0.337200 0.257564 0.916500
16 0.246200 0.241970 0.919500
17 0.246200 0.241537 0.921500
18 0.246200 0.235705 0.924500
19 0.246200 0.237325 0.920500
20 0.201400 0.229699 0.923500
21 0.201400 0.227426 0.923000
22 0.201400 0.228554 0.924000
23 0.201400 0.226941 0.925500
24 0.184300 0.225816 0.926500
</pre>
| 1,609 |
boychaboy/SNLI_distilroberta-base | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
dkhara/bert-news | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ### Bert-News | 13 |
valurank/distilbert-quality | [
"bad",
"good",
"medium"
] | ---
license: other
language: en
datasets:
- valurank/news-small
---
# DistilBERT fine-tuned for news classification
This model is based on [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) pretrained weights, with a classification head fine-tuned to classify news articles into 3 categories (bad, medium, good).
## Training data
The dataset used to fine-tune the model is [news-small](https://huggingface.co/datasets/valurank/news-small), the 300-article news dataset manually annotated by Alex.
## Inputs
Similar to its base model, this model accepts inputs with a maximum length of 512 tokens.
| 626 |
vslaykovsky/roberta-news-duplicates | null | Entry not found | 15 |
vumichien/sequence-classification-bigbird-roberta-base | null | Entry not found | 15 |
Yoonseong/climatebert_trained | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: mit
---
| 24 |
Jatin-WIAI/doctor_patient_clf_en | null | Entry not found | 15 |
UT/BRTW_DEBIAS_SHORT | null | Entry not found | 15 |
sam34738/bert-hindi-kabita | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-hindi-kabita
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-hindi-kabita
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1956 | 1.0 | 460 | 0.5352 |
| 0.4796 | 2.0 | 920 | 0.4795 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
| 1,347 |
Raychanan/chinese-roberta-wwm-ext-FineTuned-Binary | null | DO NOT USE THIS | 15 |
gchhablani/bert-base-cased-finetuned-qnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9099395936298736
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-qnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3986
- Accuracy: 0.9099
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name qnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-qnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.337 | 1.0 | 6547 | 0.9013 | 0.2448 |
| 0.1971 | 2.0 | 13094 | 0.9143 | 0.2839 |
| 0.1175 | 3.0 | 19641 | 0.9099 | 0.3986 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| 2,647 |
shrugging-grace/tweetclassifier | null | # shrugging-grace/tweetclassifier
## Model description
This model classifies tweets as either relating to the Covid-19 pandemic or not.
## Intended uses & limitations
It is intended to be used on tweets commenting on UK politics, in particular those trending with the #PMQs hashtag, as this refers to the weekly Prime Minister's Questions.
#### How to use
``LABEL_0`` means that the tweet relates to Covid-19
``LABEL_1`` means that the tweet does not relate to Covid-19
## Training data
The model was trained on 1000 tweets (with the "#PMQs" hashtag), which were manually labeled by the author. The tweets were collected between May and July 2020.
### BibTeX entry and citation info
This was based on a pretrained version of BERT.
```bibtex
@article{devlin2018bert,
  title={Bert: Pre-training of deep bidirectional transformers for language understanding},
  author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1810.04805},
  year={2018}
}
```
| 992 |
textattack/xlnet-base-cased-QQP | null | Entry not found | 15 |
veronica320/TE-for-Event-Extraction | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | # TE-for-Event-Extraction
## Model description
This is a TE model as part of the event extraction system in the ACL2021 paper: [Zero-shot Event Extraction via Transfer Learning: Challenges and Insights](https://aclanthology.org/2021.acl-short.42/). The pretrained architecture is [roberta-large](https://huggingface.co/roberta-large) and the fine-tuning data is [MNLI](https://cims.nyu.edu/~sbowman/multinli/).
The label mapping is:
```
LABEL_0: Contradiction
LABEL_1: Neutral
LABEL_2: Entailment
```
## Demo
To see how the model works, type a sentence and a hypothesis separated by "\<\/s\>\<\/s\>" in the right-hand-side textbox under "Hosted inference API".
Example:
- Input:
```
A car bomb exploded Thursday in a crowded outdoor market in the heart of Jerusalem. </s></s> This text is about an attack.
```
- Output:
```
LABEL_2 (Entailment)
```
## Usage
- To use the TE model independently, follow the [huggingface documentation on AutoModelForSequenceClassification](https://huggingface.co/transformers/task_summary.html#sequence-classification).
- To use it as part of the event extraction system, please check out [our Github repo](https://github.com/veronica320/Zeroshot-Event-Extraction).
### BibTeX entry and citation info
```
@inproceedings{lyu-etal-2021-zero,
title = "Zero-shot Event Extraction via Transfer Learning: {C}hallenges and Insights",
author = "Lyu, Qing and
Zhang, Hongming and
Sulem, Elior and
Roth, Dan",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-short.42",
doi = "10.18653/v1/2021.acl-short.42",
pages = "322--332",
abstract = "Event extraction has long been a challenging task, addressed mostly with supervised methods that require expensive annotation and are not extensible to new event ontologies. In this work, we explore the possibility of zero-shot event extraction by formulating it as a set of Textual Entailment (TE) and/or Question Answering (QA) queries (e.g. {``}A city was attacked{''} entails {``}There is an attack{''}), exploiting pretrained TE/QA models for direct transfer. On ACE-2005 and ERE, our system achieves acceptable results, yet there is still a large gap from supervised approaches, showing that current QA and TE technologies fail in transferring to a different domain. To investigate the reasons behind the gap, we analyze the remaining key challenges, their respective impact, and possible improvement directions.",
}
``` | 2,764 |
akoksal/bounti | [
"negative",
"neutral",
"positive"
] | ---
language: "tr"
tags:
- sentiment
- twitter
- turkish
---
This Turkish Sentiment Analysis model is a fine-tuned checkpoint of pretrained [BERTurk model 128k uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) with [BounTi dataset](https://ieeexplore.ieee.org/document/9477814).
## Usage in Hugging Face Pipeline
```
from transformers import pipeline
bounti = pipeline("sentiment-analysis",model="akoksal/bounti")
print(bounti("Bu yemeği pek sevmedim"))
>> [{'label': 'negative', 'score': 0.8012508153915405}]
```
## Results
The scores of the finetuned model with BERTurk:
||Accuracy|Precision|Recall|F1|
|-------------|:---------:|:---------:|:------:|:-----:|
|Validation|0.745|0.706|0.730|0.715|
|Test|0.723|0.692|0.729|0.701|
## Dataset
You can find the dataset in [our Github repo](https://github.com/boun-tabi/BounTi-Turkish-Sentiment-Analysis) with the training, validation, and test splits.
Due to Twitter's copyright policy, we cannot release the full text of the tweets. We share the tweet IDs, and the full text can be downloaded through the official Twitter API.
| | Training | Validation | Test |
|----------|:--------:|:----------:|:----:|
| Positive | 1691 | 188 | 469 |
| Neutral | 3034 | 338 | 843 |
| Negative | 1008 | 113 | 280 |
| Total | 5733 | 639 | 1592 |
## Citation
You can cite the following paper if you use our work:
```
@INPROCEEDINGS{BounTi,
author={Köksal, Abdullatif and Özgür, Arzucan},
booktitle={2021 29th Signal Processing and Communications Applications Conference (SIU)},
title={Twitter Dataset and Evaluation of Transformers for Turkish Sentiment Analysis},
year={2021},
volume={},
number={}
}
```
---
| 1,733 |
tinkoff-ai/response-toxicity-classifier-base | [
"ok",
"risks",
"severe_toxic",
"toxic"
] | ---
language: ["ru"]
tags:
- russian
- pretraining
- conversational
license: mit
widget:
- text: "[CLS] привет [SEP] привет! [SEP] как дела? [RESPONSE_TOKEN] норм"
example_title: "Dialog example 1"
- text: "[CLS] привет [SEP] привет! [SEP] как дела? [RESPONSE_TOKEN] ты *****"
example_title: "Dialog example 2"
---
# response-toxicity-classifier-base
[BERT classifier from Skoltech](https://huggingface.co/Skoltech/russian-inappropriate-messages), finetuned on contextual data with 4 labels.
# Training
[*Skoltech/russian-inappropriate-messages*](https://huggingface.co/Skoltech/russian-inappropriate-messages) was finetuned on a multiclass data with four classes (*check the exact mapping between idx and label in* `model.config`).
1) OK label — the message is OK in context and does not intend to offend or otherwise harm the reputation of the speaker.
2) Toxic label — the message might be seen as offensive in the given context.
3) Severe toxic label — the message is offensive, full of anger, and was written to provoke a fight or other discomfort.
4) Risks label — the message touches on sensitive topics (e.g. religion, politics) and can harm the reputation of the speaker.
The model was finetuned on soon-to-be-posted dialog datasets.
# Evaluation results
Model achieves the following results on the validation datasets (will be posted soon):
|| OK - F1-score | TOXIC - F1-score | SEVERE TOXIC - F1-score | RISKS - F1-score |
|---------|---------------|------------------|-------------------------|------------------|
|internet dialogs | 0.896 | 0.348 | 0.490 | 0.591 |
|chatbot dialogs | 0.940 | 0.295 | 0.729 | 0.46 |
# Use in transformers
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-toxicity-classifier-base')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-toxicity-classifier-base')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.softmax(logits, dim=-1)[0].cpu().detach().numpy()
```
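To turn the probabilities from the snippet above into one of the four labels, the highest-scoring index can be looked up in the model config; a minimal sketch (the label order below is illustrative — check `model.config.id2label` for the exact mapping):

```python
# Hypothetical continuation of the snippet above: map the most probable
# class index back to a label name. The exact mapping must be taken from
# model.config.id2label; the dict below is only illustrative.
probas = [0.85, 0.05, 0.03, 0.07]  # example softmax output
pred_idx = max(range(len(probas)), key=probas.__getitem__)
id2label = {0: "ok", 1: "risks", 2: "severe_toxic", 3: "toxic"}
print(id2label[pred_idx])  # prints "ok"
```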
The work was done during internship at Tinkoff by [Nikita Stepanov](https://huggingface.co/nikitast).
| 2,464 |
BellaAndBria/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9425
- name: F1
type: f1
value: 0.942387859809443
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1611
- Accuracy: 0.9425
- F1: 0.9424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1358 | 1.0 | 250 | 0.1765 | 0.9345 | 0.9340 |
| 0.0885 | 2.0 | 500 | 0.1588 | 0.937 | 0.9371 |
| 0.0727 | 3.0 | 750 | 0.1611 | 0.9425 | 0.9424 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,878 |
svenstahlmann/finetuned-distilbert-needmining | [
"contains need",
"no need"
] | ---
language: en
tags:
- distilbert
- needmining
license: apache-2.0
metric:
- f1
---
# Finetuned-Distilbert-needmining (uncased)
This model is a finetuned version of the [Distilbert base model](https://huggingface.co/distilbert-base-uncased). It was trained to predict need-containing sentences from Amazon product reviews.
## Model description
This model is part of ongoing research; after the publication of the research, more information will be added.
## Intended uses & limitations
You can use this model to identify sentences that contain customer needs in user-generated content. This can act as a filtering process to remove uninformative content for market research.
### How to use
You can use this model directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="svenstahlmann/finetuned-distilbert-needmining")
>>> classifier("the plasic feels super cheap.")
[{'label': 'contains need', 'score': 0.9397542476654053}]
```
### Limitations and bias
We are not aware of any bias in the training data.
## Training data
The training was done on a dataset of 6400 sentences. The sentences were taken from Amazon product reviews and coded according to whether they express customer needs.
## Training procedure
For the training, we used [Population Based Training (PBT)](https://www.deepmind.com/blog/population-based-training-of-neural-networks) and optimized for f1 score on a validation set of 1600 sentences.
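As a rough illustration of the exploit/explore loop behind PBT (a toy sketch with made-up hyperparameters and a fake evaluation function, not the actual training code):

```python
import random

random.seed(0)

# Toy PBT sketch: each population member holds a hyperparameter and a score.
# Periodically, weak members copy (exploit) a strong member's settings and
# perturb them (explore). This is illustrative, not the paper's code.
population = [{"lr": lr, "score": 0.0} for lr in (1e-5, 3e-5, 5e-5, 1e-4)]

def fake_eval(member):
    # stand-in for "train for a while, then measure validation F1"
    return 1.0 - abs(member["lr"] - 3e-5) * 1e4

for _ in range(3):  # a few PBT generations
    for m in population:
        m["score"] = fake_eval(m)
    population.sort(key=lambda m: m["score"], reverse=True)
    for weak, strong in zip(population[2:], population[:2]):
        weak["lr"] = strong["lr"] * random.choice((0.8, 1.2))  # exploit + explore

best = max(population, key=lambda m: m["score"])
print(best["lr"])  # converges towards the optimum of fake_eval, 3e-05
```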
### Preprocessing
The preprocessing follows the [Distilbert base model](https://huggingface.co/distilbert-base-uncased).
### Pretraining
The model was trained on a titan RTX for 1 hour.
## Evaluation results
Results on the validation set:
| F1 |
|:----:|
| 76.0 |
### BibTeX entry and citation info
coming soon | 1,837 |
CenIA/distillbert-base-spanish-uncased-finetuned-xnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Greg1901/BertSummaDev_AFD | null | Entry not found | 15 |
Maelstrom77/bert-base-uncased-snli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ```
# Remap predicted label ids (0→2, 1→0, 2→1) to match the target label order
for i in range(len(predictions)):
if predictions[i] == 0:
predictions[i] = 2
elif predictions[i] == 1:
predictions[i] = 0
elif predictions[i] == 2:
predictions[i] = 1
``` | 192 |
NDugar/deberta-v2-xlarge-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
I trained DeBERTa-v3 XL on MNLI using my own training code and obtained this result.
TransQuest/monotransquest-da-en_de-wiki | [
"LABEL_0"
] | ---
language: en-de
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi in all the language pairs tested.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-en_de-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 5,401 |
ainize/klue-bert-base-re | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | # bert-base for KLUE Relation Extraction task.
Fine-tuned klue/bert-base using KLUE RE dataset.
- <a href="https://klue-benchmark.com/">KLUE Benchmark Official Webpage</a>
- <a href="https://github.com/KLUE-benchmark/KLUE">KLUE Official Github</a>
- <a href="https://github.com/ainize-team/klue-re-workspace">KLUE RE Github</a>
- Run KLUE RE on free GPU : <a href="https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ainize-team/klue-re-workspace">Ainize Workspace</a>
<br>
# Usage
<pre><code>
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-re")
model = AutoModelForSequenceClassification.from_pretrained("ainize/klue-bert-base-re")
# Add "<subj>", "</subj>" to both ends of the subject object and "<obj>", "</obj>" to both ends of the object object.
sentence = "<subj>손흥민</subj>은 <obj>대한민국</obj>에서 태어났다."
encodings = tokenizer(sentence,
max_length=128,
truncation=True,
padding="max_length",
return_tensors="pt")
outputs = model(**encodings)
logits = outputs['logits']
preds = torch.argmax(logits, dim=1)
</code></pre>
<br>
# About us
- <a href="https://ainize.ai/teachable-nlp">Teachable NLP</a> - Train NLP models with your own text without writing any code
- <a href="https://ainize.ai/">Ainize</a> - Deploy ML project using free gpu | 1,500 |
berkergurcay/1k-pretrained-bert-model | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
bgoel4132/twitter-sentiment | [
"cyclone",
"earthquake",
"explosion",
"fire",
"flood",
"hurricane",
"medical",
"pollution",
"tornado",
"typhoon",
"volcano"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bgoel4132/autonlp-data-twitter-sentiment
co2_eq_emissions: 186.8637425115097
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 35868888
- CO2 Emissions (in grams): 186.8637425115097
## Validation Metrics
- Loss: 0.2020547091960907
- Accuracy: 0.9233253193796257
- Macro F1: 0.9240407542958707
- Micro F1: 0.9233253193796257
- Weighted F1: 0.921800586774046
- Macro Precision: 0.9432284179846658
- Micro Precision: 0.9233253193796257
- Weighted Precision: 0.9247263361914827
- Macro Recall: 0.9139437626409382
- Micro Recall: 0.9233253193796257
- Weighted Recall: 0.9233253193796257
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bgoel4132/autonlp-twitter-sentiment-35868888
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bgoel4132/autonlp-twitter-sentiment-35868888", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bgoel4132/autonlp-twitter-sentiment-35868888", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,403 |
chrommium/sbert_large-finetuned-sent_in_news_sents | [
"LABEL_-3",
"LABEL_-2",
"LABEL_-1",
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sbert_large-finetuned-sent_in_news_sents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7056
- Accuracy: 0.7301
- F1: 0.5210
## Model examples
The model responds to the label X in a news text. For example:

- For 'Газпром отозвал лицензию у X, сообщает Финам' the model will return the negative label -3
- For 'X отозвал лицензию у Сбербанка, сообщает Финам' the model will return the neutral label 0
- For 'Газпром отозвал лицензию у Сбербанка, сообщает X' the model will return the neutral label 0
- For 'X демонстрирует высокую прибыль, сообщает Финам' the model will return the positive label 1
## Simple example of News preprocessing for Russian before BERT
```
from natasha import (
Segmenter,
MorphVocab,
NewsEmbedding,
NewsMorphTagger,
NewsSyntaxParser,
NewsNERTagger,
PER,
NamesExtractor,
Doc
)
segmenter = Segmenter()
emb = NewsEmbedding()
morph_tagger = NewsMorphTagger(emb)
syntax_parser = NewsSyntaxParser(emb)
morph_vocab = MorphVocab()
### ----------------------------- key sentences block -----------------------------
def find_synax_tokens_with_order(doc, start, tokens, text_arr, full_str):
    ''' Finds all syntax tokens that correspond to a given set of plain tokens (found
    for a particular NER by other functions).
    Returns a dictionary of the found syntax tokens (the key is a token identifier made up
    of the sentence number and the token number within the sentence).
    Starts the search from the given position in the list of syntax tokens; additionally
    returns the stop position from which the search for the next NER should continue.
    '''
found = []
in_str = False
str_candidate = ''
str_counter = 0
if len(text_arr) == 0:
return [], start
for i in range(start, len(doc.syntax.tokens)):
t = doc.syntax.tokens[i]
if in_str:
str_counter += 1
if str_counter < len(text_arr) and t.text == text_arr[str_counter]:
str_candidate += t.text
found.append(t)
if str_candidate == full_str:
return found, i+1
else:
in_str = False
str_candidate = ''
str_counter = 0
found = []
if t.text == text_arr[0]:
found.append(t)
str_candidate = t.text
if str_candidate == full_str:
return found, i+1
in_str = True
return [], len(doc.syntax.tokens)
def find_tokens_in_diap_with_order(doc, start_token, diap):
    ''' Finds all plain tokens (without syntax information) that fall within
    the given range. These ranges come from the NER markup.
    Returns the found tokens both as an array of tokens and as an array of strings.
    Starts the search from the given position and additionally returns the stop position.
    '''
found_tokens = []
found_text = []
full_str = ''
next_i = 0
for i in range(start_token, len(doc.tokens)):
t = doc.tokens[i]
if t.start > diap[-1]:
next_i = i
break
if t.start in diap:
found_tokens.append(t)
found_text.append(t.text)
full_str += t.text
return found_tokens, found_text, full_str, next_i
def add_found_arr_to_dict(found, dict_dest):
for synt in found:
dict_dest.update({synt.id: synt})
return dict_dest
def make_all_syntax_dict(doc):
all_syntax = {}
for synt in doc.syntax.tokens:
all_syntax.update({synt.id: synt})
return all_syntax
def is_consiquent(id_1, id_2):
    ''' Checks whether two tokens follow each other with no gap between their keys. '''
id_1_list = id_1.split('_')
id_2_list = id_2.split('_')
if id_1_list[0] != id_2_list[0]:
return False
return int(id_1_list[1]) + 1 == int(id_2_list[1])
def replace_found_to(found, x_str):
    ''' Replaces a sequence of NER tokens with a placeholder. '''
prev_id = '0_0'
for synt in found:
if is_consiquent(prev_id, synt.id):
synt.text = ''
else:
synt.text = x_str
prev_id = synt.id
def analyze_doc(text):
    ''' Runs Natasha to analyze the document. '''
doc = Doc(text)
doc.segment(segmenter)
doc.tag_morph(morph_tagger)
doc.parse_syntax(syntax_parser)
ner_tagger = NewsNERTagger(emb)
doc.tag_ner(ner_tagger)
return doc
def find_non_sym_syntax_short(entity_name, doc, add_X=False, x_str='X'):
    ''' Looks for the given entity in the text among all NERs (possibly in a different grammatical form).
    entity_name - the entity we are looking for;
    doc - a document preprocessed with Natasha;
    add_X - whether to replace the entity with a placeholder;
    x_str - the replacement text.
    Returns:
    all_found_syntax - a dictionary of all matching tokens that form the sought entities, in which
    the NER has been replaced with the placeholder if requested;
    all_syntax - a dictionary of all tokens.
    '''
all_found_syntax = {}
current_synt_number = 0
current_tok_number = 0
    # iterate over all found NERs
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
diap = range(span.start, span.stop)
        # build a dictionary of all syntax elements (key -- an id made of the sentence number and the position within the sentence)
all_syntax = make_all_syntax_dict(doc)
        # find all plain tokens inside the NER
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc, current_tok_number,
diap)
        # from the found plain tokens, find all syntax tokens inside this NER
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens, found_text,
full_str)
        # if the NER text matches the given entity, perform the replacement
if entity_name.find(span.normal) >= 0 or span.normal.find(entity_name) >= 0:
if add_X:
replace_found_to(found, x_str)
all_found_syntax = add_found_arr_to_dict(found, all_found_syntax)
return all_found_syntax, all_syntax
def key_sentences(all_found_syntax):
    ''' Finds the numbers of the sentences that contain the sought NER. '''
key_sent_numb = {}
for synt in all_found_syntax.keys():
key_sent_numb.update({synt.split('_')[0]: 1})
return key_sent_numb
def openinig_punct(x):
opennings = ['«', '(']
return x in opennings
def key_sentences_str(entitiy_name, doc, add_X=False, x_str='X', return_all=True):
    ''' Builds the final text, which keeps only the sentences containing the key entity;
    if requested, that entity is replaced with a placeholder.
    '''
all_found_syntax, all_syntax = find_non_sym_syntax_short(entitiy_name, doc, add_X, x_str)
key_sent_numb = key_sentences(all_found_syntax)
str_ret = ''
for s in all_syntax.keys():
if (s.split('_')[0] in key_sent_numb.keys()) or (return_all):
to_add = all_syntax[s]
if s in all_found_syntax.keys():
to_add = all_found_syntax[s]
else:
if to_add.rel == 'punct' and not openinig_punct(to_add.text):
str_ret = str_ret.rstrip()
str_ret += to_add.text
if (not openinig_punct(to_add.text)) and (to_add.text != ''):
str_ret += ' '
return str_ret
### ----------------------------- key entities block -----------------------------
def find_synt(doc, synt_id):
for synt in doc.syntax.tokens:
if synt.id == synt_id:
return synt
return None
def is_subj(doc, synt, recursion_list=[]):
    ''' Reports whether the word is a subject or part of a compound subject. '''
if synt.rel == 'nsubj':
return True
if synt.rel == 'appos':
found_head = find_synt(doc, synt.head_id)
if found_head.id in recursion_list:
return False
return is_subj(doc, found_head, recursion_list + [synt.id])
return False
def find_subjects_in_syntax(doc):
    ''' Returns a dictionary that records, for each NER, whether it is
    the subject of its sentence.
    Returns the start position of the NER and whether it was a subject (or appos).
    '''
found_subjects = {}
current_synt_number = 0
current_tok_number = 0
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
found_subjects.update({span.start: 0})
diap = range(span.start, span.stop)
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc,
current_tok_number,
diap)
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens,
found_text, full_str)
found_subjects.update({span.start: 0})
for synt in found:
if is_subj(doc, synt):
found_subjects.update({span.start: 1})
return found_subjects
def entity_weight(lst, c=1):
return c*lst[0]+lst[1]
def determine_subject(found_subjects, doc, new_agency_list, return_best=True, threshold=0.75):
    ''' Determines the key NER and the list of the most important NERs, based on how many
    times each of them occurs in the text overall and how many times as a subject '''
objects_arr = []
objects_arr_ners = []
should_continue = False
for span in doc.spans:
should_continue = False
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
if span.normal in new_agency_list:
continue
for i in range(len(objects_arr)):
t, lst = objects_arr[i]
if t.find(span.normal) >= 0:
lst[0] += 1
lst[1] += found_subjects[span.start]
should_continue = True
break
if span.normal.find(t) >= 0:
objects_arr[i] = (span.normal, [lst[0]+1, lst[1]+found_subjects[span.start]])
should_continue = True
break
if should_continue:
continue
objects_arr.append((span.normal, [1, found_subjects[span.start]]))
objects_arr_ners.append(span.normal)
max_weight = 0
opt_ent = 0
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight < w:
max_weight = w
opt_ent = t
if not return_best:
return opt_ent, objects_arr_ners
bests = []
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight*threshold < w:
bests.append(t)
return opt_ent, bests
text = '''В офисах Сбера начали тестировать технологию помощи посетителям в экстренных ситуациях. «Зеленая кнопка» будет
в зонах круглосуточного обслуживания офисов банка в Воронеже, Санкт-Петербурге, Подольске, Пскове, Орле и Ярославле.
В них находятся стенды с сенсорными кнопками, обеспечивающие связь с операторами центра мониторинга службы безопасности
банка. Получив сигнал о помощи, оператор центра может подключиться к объекту по голосовой связи. С помощью камер
видеонаблюдения он оценит обстановку и при необходимости вызовет полицию или скорую помощь. «Зеленой кнопкой» можно
воспользоваться в нерабочее для отделения время, если возникла угроза жизни или здоровью. В остальных случаях помочь
клиентам готовы сотрудники отделения банка. «Одно из направлений нашей работы в области ESG и устойчивого развития
— это забота об обществе. И здоровье людей как высшая ценность является его основой. Поэтому задача банка в области
безопасности гораздо масштабнее, чем обеспечение только финансовой безопасности клиентов. Этот пилотный проект
приурочен к 180-летию Сбербанка: мы хотим, чтобы, приходя в банк, клиент чувствовал, что его жизнь и безопасность
— наша ценность», — отметил заместитель председателя правления Сбербанка Станислав Кузнецов.'''
doc = analyze_doc(text)
key_entity = determine_subject(find_subjects_in_syntax(doc), doc, [])[0]
text_for_model = key_sentences_str(key_entity, doc, add_X=True, x_str='X', return_all=False)
```
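The entity-selection logic of `determine_subject` can also be illustrated in isolation; a minimal self-contained sketch with made-up occurrence counts:

```python
# Standalone illustration of the weighting used in determine_subject:
# each entity gets weight c*total_occurrences + subject_occurrences, and
# the entity with the maximum weight becomes the key entity.
def entity_weight(lst, c=1):
    return c * lst[0] + lst[1]

# made-up counts: (entity, [total occurrences, occurrences as subject])
counts = [("Сбербанк", [4, 3]), ("Финам", [2, 0]), ("Газпром", [3, 1])]
key_entity = max(counts, key=lambda obj: entity_weight(obj[1]))[0]
print(key_entity)  # -> "Сбербанк" (weight 7 vs 2 and 4)
```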
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 176 | 0.9504 | 0.6903 | 0.2215 |
| No log | 2.0 | 352 | 0.9065 | 0.7159 | 0.4760 |
| 0.8448 | 3.0 | 528 | 0.9687 | 0.7045 | 0.4774 |
| 0.8448 | 4.0 | 704 | 1.2436 | 0.7045 | 0.4686 |
| 0.8448 | 5.0 | 880 | 1.4809 | 0.7273 | 0.4630 |
| 0.2074 | 6.0 | 1056 | 1.5866 | 0.7330 | 0.5185 |
| 0.2074 | 7.0 | 1232 | 1.7056 | 0.7301 | 0.5210 |
| 0.2074 | 8.0 | 1408 | 1.6982 | 0.7415 | 0.5056 |
| 0.0514 | 9.0 | 1584 | 1.8088 | 0.7273 | 0.5203 |
| 0.0514 | 10.0 | 1760 | 1.9250 | 0.7102 | 0.4879 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| 14,591 |
fabriceyhc/bert-base-uncased-amazon_polarity | null | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- amazon_polarity
metrics:
- accuracy
model-index:
- name: bert-base-uncased-amazon_polarity
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.94647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-amazon_polarity
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2945
- Accuracy: 0.9465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1782000
- training_steps: 17820000
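The linear schedule with warmup used above can be sketched as a pure function of the step number (an illustrative sketch mirroring the shape of `get_linear_schedule_with_warmup` in transformers, with this run's hyperparameters):

```python
# Illustrative sketch of the linear warmup + linear decay schedule
# (mirrors the shape of transformers' get_linear_schedule_with_warmup).
def linear_warmup_lr(step, base_lr=5e-05, warmup_steps=1782000, total_steps=17820000):
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up linearly to base_lr
    # then decay linearly to zero over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(891000))    # halfway through warmup -> 2.5e-05
print(linear_warmup_lr(1782000))   # peak -> 5e-05
print(linear_warmup_lr(17820000))  # end of training -> 0.0
```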
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.7155 | 0.0 | 2000 | 0.7060 | 0.4622 |
| 0.7054 | 0.0 | 4000 | 0.6925 | 0.5165 |
| 0.6842 | 0.0 | 6000 | 0.6653 | 0.6116 |
| 0.6375 | 0.0 | 8000 | 0.5721 | 0.7909 |
| 0.4671 | 0.0 | 10000 | 0.3238 | 0.8770 |
| 0.3403 | 0.0 | 12000 | 0.3692 | 0.8861 |
| 0.4162 | 0.0 | 14000 | 0.4560 | 0.8908 |
| 0.4728 | 0.0 | 16000 | 0.5071 | 0.8980 |
| 0.5111 | 0.01 | 18000 | 0.5204 | 0.9015 |
| 0.4792 | 0.01 | 20000 | 0.5193 | 0.9076 |
| 0.544 | 0.01 | 22000 | 0.4835 | 0.9133 |
| 0.4745 | 0.01 | 24000 | 0.4689 | 0.9170 |
| 0.4403 | 0.01 | 26000 | 0.4778 | 0.9177 |
| 0.4405 | 0.01 | 28000 | 0.4754 | 0.9163 |
| 0.4375 | 0.01 | 30000 | 0.4808 | 0.9175 |
| 0.4628 | 0.01 | 32000 | 0.4340 | 0.9244 |
| 0.4488 | 0.01 | 34000 | 0.4162 | 0.9265 |
| 0.4608 | 0.01 | 36000 | 0.4031 | 0.9271 |
| 0.4478 | 0.01 | 38000 | 0.4502 | 0.9253 |
| 0.4237 | 0.01 | 40000 | 0.4087 | 0.9279 |
| 0.4601 | 0.01 | 42000 | 0.4133 | 0.9269 |
| 0.4153 | 0.01 | 44000 | 0.4230 | 0.9306 |
| 0.4096 | 0.01 | 46000 | 0.4108 | 0.9301 |
| 0.4348 | 0.01 | 48000 | 0.4138 | 0.9309 |
| 0.3787 | 0.01 | 50000 | 0.4066 | 0.9324 |
| 0.4172 | 0.01 | 52000 | 0.4812 | 0.9206 |
| 0.3897 | 0.02 | 54000 | 0.4013 | 0.9325 |
| 0.3787 | 0.02 | 56000 | 0.3837 | 0.9344 |
| 0.4253 | 0.02 | 58000 | 0.3925 | 0.9347 |
| 0.3959 | 0.02 | 60000 | 0.3907 | 0.9353 |
| 0.4402 | 0.02 | 62000 | 0.3708 | 0.9341 |
| 0.4115 | 0.02 | 64000 | 0.3477 | 0.9361 |
| 0.3876 | 0.02 | 66000 | 0.3634 | 0.9373 |
| 0.4286 | 0.02 | 68000 | 0.3778 | 0.9378 |
| 0.422 | 0.02 | 70000 | 0.3540 | 0.9361 |
| 0.3732 | 0.02 | 72000 | 0.3853 | 0.9378 |
| 0.3641 | 0.02 | 74000 | 0.3951 | 0.9386 |
| 0.3701 | 0.02 | 76000 | 0.3582 | 0.9388 |
| 0.4498 | 0.02 | 78000 | 0.3268 | 0.9375 |
| 0.3587 | 0.02 | 80000 | 0.3825 | 0.9401 |
| 0.4474 | 0.02 | 82000 | 0.3155 | 0.9391 |
| 0.3598 | 0.02 | 84000 | 0.3666 | 0.9388 |
| 0.389 | 0.02 | 86000 | 0.3745 | 0.9377 |
| 0.3625 | 0.02 | 88000 | 0.3776 | 0.9387 |
| 0.3511 | 0.03 | 90000 | 0.4275 | 0.9336 |
| 0.3428 | 0.03 | 92000 | 0.4301 | 0.9336 |
| 0.4042 | 0.03 | 94000 | 0.3547 | 0.9359 |
| 0.3583 | 0.03 | 96000 | 0.3763 | 0.9396 |
| 0.3887 | 0.03 | 98000 | 0.3213 | 0.9412 |
| 0.3915 | 0.03 | 100000 | 0.3557 | 0.9409 |
| 0.3378 | 0.03 | 102000 | 0.3627 | 0.9418 |
| 0.349 | 0.03 | 104000 | 0.3614 | 0.9402 |
| 0.3596 | 0.03 | 106000 | 0.3834 | 0.9381 |
| 0.3519 | 0.03 | 108000 | 0.3560 | 0.9421 |
| 0.3598 | 0.03 | 110000 | 0.3485 | 0.9419 |
| 0.3642 | 0.03 | 112000 | 0.3754 | 0.9395 |
| 0.3477 | 0.03 | 114000 | 0.3634 | 0.9426 |
| 0.4202 | 0.03 | 116000 | 0.3071 | 0.9427 |
| 0.3656 | 0.03 | 118000 | 0.3155 | 0.9441 |
| 0.3709 | 0.03 | 120000 | 0.2923 | 0.9433 |
| 0.374 | 0.03 | 122000 | 0.3272 | 0.9441 |
| 0.3142 | 0.03 | 124000 | 0.3348 | 0.9444 |
| 0.3452 | 0.04 | 126000 | 0.3603 | 0.9436 |
| 0.3365 | 0.04 | 128000 | 0.3339 | 0.9434 |
| 0.3353 | 0.04 | 130000 | 0.3471 | 0.9450 |
| 0.343 | 0.04 | 132000 | 0.3508 | 0.9418 |
| 0.3174 | 0.04 | 134000 | 0.3753 | 0.9436 |
| 0.3009 | 0.04 | 136000 | 0.3687 | 0.9422 |
| 0.3785 | 0.04 | 138000 | 0.3818 | 0.9396 |
| 0.3199 | 0.04 | 140000 | 0.3291 | 0.9438 |
| 0.4049 | 0.04 | 142000 | 0.3372 | 0.9454 |
| 0.3435 | 0.04 | 144000 | 0.3315 | 0.9459 |
| 0.3814 | 0.04 | 146000 | 0.3462 | 0.9401 |
| 0.359 | 0.04 | 148000 | 0.3981 | 0.9361 |
| 0.3552 | 0.04 | 150000 | 0.3226 | 0.9469 |
| 0.345 | 0.04 | 152000 | 0.3731 | 0.9384 |
| 0.3228 | 0.04 | 154000 | 0.2956 | 0.9471 |
| 0.3637 | 0.04 | 156000 | 0.2869 | 0.9477 |
| 0.349 | 0.04 | 158000 | 0.3331 | 0.9430 |
| 0.3374 | 0.04 | 160000 | 0.4159 | 0.9340 |
| 0.3718 | 0.05 | 162000 | 0.3241 | 0.9459 |
| 0.315 | 0.05 | 164000 | 0.3544 | 0.9391 |
| 0.3215 | 0.05 | 166000 | 0.3311 | 0.9451 |
| 0.3464 | 0.05 | 168000 | 0.3682 | 0.9453 |
| 0.3495 | 0.05 | 170000 | 0.3193 | 0.9469 |
| 0.305 | 0.05 | 172000 | 0.4132 | 0.9389 |
| 0.3479 | 0.05 | 174000 | 0.3465 | 0.9470 |
| 0.3537 | 0.05 | 176000 | 0.3277 | 0.9449 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.12.1
- Tokenizers 0.10.3
| 7,263 |
mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis | null | ---
language: es
tags:
- restaurant
- classification
- reviews
widget:
- text: "No está a la altura, no volveremos."
---
# Electricidad-small fine-tuned on restaurant review sentiment analysis dataset
Test set accuracy: 0.86 | 225 |
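A minimal usage sketch with the `transformers` pipeline (the model id is the repository name; `top_label` is a small helper added here for illustration):

```python
def top_label(results):
    # Return the highest-scoring label from a text-classification pipeline output.
    return max(results, key=lambda r: r["score"])["label"]

if __name__ == "__main__":
    # Deferred import so the helper above stays dependency-free.
    from transformers import pipeline

    nlp = pipeline(
        "text-classification",
        model="mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis",
    )
    print(top_label(nlp("No está a la altura, no volveremos.")))
```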
jkhan447/sentiment-model-sample-27go-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-27go-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.5888888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-27go-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1765
- Accuracy: 0.5889
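Because the fine-tuned head exposes generic `LABEL_<i>` names, the GoEmotions label strings have to be recovered from the dataset itself. A sketch (assuming the `datasets` and `transformers` libraries; `label_to_name` is a helper added here for illustration):

```python
def label_to_name(label_id, names):
    # Map a generic "LABEL_<i>" string back to the i-th dataset label name.
    return names[int(label_id.split("_")[1])]

if __name__ == "__main__":
    from datasets import load_dataset
    from transformers import pipeline

    features = load_dataset("go_emotions", "simplified", split="train").features
    names = features["labels"].feature.names
    nlp = pipeline("text-classification", model="jkhan447/sentiment-model-sample-27go-emotion")
    pred = nlp("I am so happy today!")[0]
    print(label_to_name(pred["label"], names))
```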
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
| 1,450 |
IIC/roberta-base-bne-ranker | [
"LABEL_0"
] | ---
language:
- es
tags:
- sentence similarity
- passage reranking
datasets:
- IIC/msmarco_es
metrics:
- eval_MRR@10: 0.688
model-index:
- name: roberta-base-bne-ranker
results:
- task:
      type: text similarity
      name: text similarity
    dataset:
      type: IIC/msmarco_es
      name: IIC/msmarco_es
      args: es
metrics:
- type: MRR@10
value: 0.688
name: eval_MRR@10
---
This is a model to rank documents by relevance to a query. It is trained on an [automatically translated version of MS MARCO](https://huggingface.co/datasets/IIC/msmarco_es). After some experiments, the best configuration was to train for 2 epochs with learning rate 2e-5 and batch size 32.
Example of use:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder("IIC/roberta-base-bne-ranker", device="cpu")
question = "¿Cómo se llama el rey?"
contexts = ["Me encanta la canción de el rey", "Cuando el rey fue a Sevilla, perdió su silla", "El rey se llama Juan Carlos y es conocido por sus escándalos"]
similarity_scores = model.predict([[question, context] for context in contexts])
```
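The scores returned by `model.predict` can then be used to order the candidate passages (`rank_by_score` is a small helper added here for illustration):

```python
def rank_by_score(contexts, scores):
    # Sort contexts from most to least relevant according to the cross-encoder scores.
    order = sorted(range(len(contexts)), key=lambda i: scores[i], reverse=True)
    return [contexts[i] for i in order]
```

Applied to the example above, the context mentioning Juan Carlos should come first.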
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. | 1,726 |
nickil/real-fake-news | null | ---
license: mit
---
Data: [https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) | 184 |
chenshuangcufe/Bert-job | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
edumunozsala/bertin_base_sentiment_analysis_es | [
"Negativo",
"Positivo"
] | ---
language: es
tags:
- sagemaker
- bertin
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
metrics:
- accuracy
model-index:
- name: bertin_base_sentiment_analysis_es
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "IMDb Reviews in Spanish"
type: IMDbreviews_es
metrics:
    - name: Accuracy
      type: accuracy
      value: 0.898933
    - name: F1 Score
      type: f1
      value: 0.8989063
    - name: Precision
      type: precision
      value: 0.8771473
    - name: Recall
      type: recall
      value: 0.9217724
widget:
- text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
---
# Model bertin_base_sentiment_analysis_es
## **A fine-tuned model for sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the Hugging Face Deep Learning container.
The base model is **Bertin base**, a RoBERTa-base model pre-trained on the Spanish portion of mC4 using Flax.
It was trained by the Bertin Project. [Link to base model](https://huggingface.co/bertin-project/bertin-roberta-base-spanish)
Article: BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling
- Authors: Javier De la Rosa, Eduardo G. Ponferrada, Manu Romero, Paulo Villegas, Pablo González de Prado Salas, and María Grandury
- Journal: Procesamiento del Lenguaje Natural
- Volume: 68, Number: 0, Year: 2022
- url = http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403
## Dataset
The dataset is a collection of about 50,000 movie reviews in Spanish. It is balanced and provides every review in English and in Spanish, with the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Intended uses & limitations
This model is intended for sentiment analysis of Spanish text. It was fine-tuned specifically on movie reviews, but it can be applied to other kinds of reviews.
## Hyperparameters
```json
{
  "epochs": "4",
  "train_batch_size": "32",
  "eval_batch_size": "8",
  "fp16": "true",
  "learning_rate": "3e-05",
  "model_name": "\"bertin-project/bertin-roberta-base-spanish\"",
  "sagemaker_container_log_level": "20",
  "sagemaker_program": "\"train.py\""
}
```
## Evaluation results
- Accuracy = 0.8989333333333334
- F1 Score = 0.8989063750333421
- Precision = 0.877147319104633
- Recall = 0.9217724288840262
## Test results
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("edumunozsala/bertin_base_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/bertin_base_sentiment_analysis_es")
text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)  # predicted class id: 0 -> Negativo, 1 -> Positivo
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
| 3,297 |
Remicm/sentiment-analysis-model-for-socialmedia | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: sentiment-analysis-model-for-socialmedia
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9297083333333334
- name: F1
type: f1
value: 0.9298923658729169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-analysis-model-for-socialmedia
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2368
- Accuracy: 0.9297
- F1: 0.9299
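A usage sketch with the `transformers` pipeline (the `softmax` helper is added here only to show how raw logits relate to the reported scores):

```python
import math

def softmax(logits):
    # Numerically stable conversion of raw logits to probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

if __name__ == "__main__":
    from transformers import pipeline

    nlp = pipeline("text-classification", model="Remicm/sentiment-analysis-model-for-socialmedia")
    print(nlp("This movie was surprisingly good!"))
```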
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,523 |
Santarabantoosoo/PathologyBERT-meningioma | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: PathologyBERT-meningioma
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PathologyBERT-meningioma
This model is a fine-tuned version of [tsantos/PathologyBERT](https://huggingface.co/tsantos/PathologyBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8123
- Accuracy: 0.8783
- Precision: 0.25
- Recall: 0.0833
- F1: 0.125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3723 | 1.0 | 71 | 0.5377 | 0.7652 | 0.0588 | 0.0833 | 0.0690 |
| 0.3363 | 2.0 | 142 | 0.4191 | 0.8783 | 0.25 | 0.0833 | 0.125 |
| 0.2773 | 3.0 | 213 | 0.4701 | 0.8870 | 0.3333 | 0.0833 | 0.1333 |
| 0.2303 | 4.0 | 284 | 0.5831 | 0.8957 | 0.5 | 0.0833 | 0.1429 |
| 0.1657 | 5.0 | 355 | 0.7083 | 0.8348 | 0.1111 | 0.0833 | 0.0952 |
| 0.1228 | 6.0 | 426 | 1.0324 | 0.8 | 0.0769 | 0.0833 | 0.08 |
| 0.0967 | 7.0 | 497 | 0.8103 | 0.8696 | 0.2 | 0.0833 | 0.1176 |
| 0.0729 | 8.0 | 568 | 0.8711 | 0.8696 | 0.2 | 0.0833 | 0.1176 |
| 0.0624 | 9.0 | 639 | 0.7968 | 0.8783 | 0.25 | 0.0833 | 0.125 |
| 0.0534 | 10.0 | 710 | 0.8123 | 0.8783 | 0.25 | 0.0833 | 0.125 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.10.1
- Datasets 1.15.0
- Tokenizers 0.10.3
| 2,314 |
RomanCast/camembert-miam-loria-finetuned | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",
"LABEL_3",
"LABEL_30",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
language:
- fr
--- | 22 |
waboucay/camembert-large-finetuned-rua_wl_3_classes | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on the `validation` and `test` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 75.3 | 74.9 |
| test | 75.8 | 75.3 | | 367 |
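A usage sketch for premise–hypothesis classification (the example sentences are hypothetical; `is_entailed` is a helper added here for illustration):

```python
def is_entailed(pred, threshold=0.5):
    # True when the model labels the pair as entailment with enough confidence.
    return pred["label"] == "entailment" and pred["score"] >= threshold

if __name__ == "__main__":
    from transformers import pipeline

    nlp = pipeline("text-classification", model="waboucay/camembert-large-finetuned-rua_wl_3_classes")
    pred = nlp({"text": "Le chat dort sur le canapé.", "text_pair": "Un animal est en train de dormir."})
    pred = pred[0] if isinstance(pred, list) else pred  # pipeline output shape varies by version
    print(pred, is_entailed(pred))
```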
boychaboy/MNLI_bert-large-cased | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier | [
"0",
"100 metres",
"A Song of Ice and Fire",
"A Tale for the Time Being",
"ARM Holdings",
"Abigail Johnson",
"Abiogenesis",
"Abortion",
"Abraham Lincoln",
"Abstract art",
"Abu Nuwas",
"Academic degree",
"Accent (sociolinguistics)",
"Achaemenid Empire",
"Acid-base reaction",
"Acoustic guitar",
"Acrophobia",
"Acrylic paint",
"Actinium",
"Action film",
"Activision Blizzard Inc",
"Activism",
"Acura",
"Adam Levine",
"Adam Smith",
"Addiction",
"Adi Shankara",
"Adobe Inc",
"Adolescence",
"Adolf Hitler",
"Adoption",
"Adult",
"Aesthetics",
"Africa",
"Afro-Asiatic languages",
"Afterlife",
"Agatha Christie's Poirot",
"Age of Discovery",
"Age of Enlightenment",
"Ageing",
"Agoraphobia",
"Agriculture",
"Aircraft",
"Aircraft pilot",
"Airplane",
"Akamai Technologies Inc",
"Alabama",
"Alain Wertheimer",
"Alan Turing",
"Alaska",
"Albert Einstein",
"Alcohol",
"Alcoholic drink",
"Alcoholism",
"Aldi",
"Alexander McQueen",
"Alexander the Great",
"Alexey Mordashov",
"Alfred Hitchcock",
"Algae",
"Algebra",
"Algeria",
"Algorithm",
"AliOS",
"Alice Walton",
"Alight Solutions",
"Alison Sudol",
"Allergic response",
"Allergy",
"Allergy to cats",
"Alloy",
"Allstate",
"Alpaca",
"Alphabet",
"Alpine Linux",
"Alpine skiing",
"Alps",
"Aluminium",
"Amadeus IT Group SA",
"Amancio Ortega Gaona",
"Amateur geology",
"Amazon (company)",
"Amazon Echo",
"Amazon Kindle",
"Amazon River",
"Amazon rainforest",
"American Civil War reenactment",
"American Eagle Outfitters",
"American Idol",
"American Red Cross",
"American Revolution",
"American Society for the Prevention of Cruelty to Animals",
"American folk music",
"American literature",
"Americium",
"Amphibian",
"Analysis",
"Anarchism",
"Anatomy",
"Ancient Egypt",
"Ancient Greece",
"Ancient Greek philosophy",
"Ancient Rome",
"Ancient history",
"Andean civilizations",
"Andes",
"Android",
"Anesthesia",
"Anger",
"Angle",
"Angling",
"Animal",
"Animal husbandry",
"Animal print",
"Animal rights",
"Animal shelter",
"Animation",
"Anime",
"Anne of Green Gables",
"Ant-Man (film)",
"Ant-Man and the Wasp (film)",
"Antarctica",
"Anthropology",
"Antibiotic",
"Antimony",
"Antoine Lavoisier",
"Anxiety",
"Anxiety disorder",
"Apartment",
"Appalachian Trail",
"Apple",
"Apple Inc.",
"Apple MacBook",
"Apple iOS",
"Apple iPad",
"Apple iPadOS",
"Apple iPhone",
"Apple macOS",
"Apple watch",
"Apple watchOS",
"Appletini",
"Appleton, Wisconsin",
"Aquaman (film)",
"Aquarium",
"Arabic",
"Arabic alphabet",
"Archaea",
"Archaeology",
"Archimedes",
"Architecture",
"Arctic",
"Arctic Ocean",
"Area",
"Argentina",
"Argon",
"Aristotle",
"Armadillo",
"Armour",
"Army",
"Arnold Schwarzenegger",
"Arrow Electronics Inc",
"Arsenal F.C.",
"Arsenic",
"Art",
"Art movement",
"Art museum",
"Arthropod",
"Artificial intelligence",
"Artist",
"Ashok Leyland",
"Ashoka",
"Asia",
"Asseco Poland SA",
"Association football",
"Assyria",
"Astatine",
"Asteroid",
"Asthma",
"Aston Martin",
"Astronomy",
"Atheism",
"Athletics (physical culture)",
"Atlantic Ocean",
"Atmosphere of Earth",
"Atom",
"Audi",
"Augustus",
"Austin, Texas",
"Australia",
"Author",
"Autism",
"Auto mechanic",
"Autodesk Inc",
"Autograph",
"Automatic Data Processing Inc",
"Automotive industry",
"Autonomous car",
"Avenged Sevenfold",
"Avengers (comics)",
"Avengers Age of Ultron (film)",
"Avengers Endgame (film)",
"Avengers Infinity War (film)",
"Avicenna",
"Avnet Inc",
"Aztecs",
"BAIC",
"BBC",
"BMW",
"BYD Auto",
"Bac Lieu",
"Bachelor of Science in Nursing",
"Bachelor's degree",
"Back pain",
"Backstreet Boys",
"Backstroke",
"Bacon",
"Bacteria",
"Bada",
"Badminton",
"Bagel",
"Baileys Irish Cream",
"Bajaj Auto",
"Bakery",
"Ballet",
"Ballroom dance",
"Baltimore",
"Baltimore Orioles",
"Banana",
"Band (rock and pop)",
"Bangladesh",
"Banh Bao (food)",
"Banh Beo (food)",
"Banh Bot Loc (food)",
"Banh Canh Cua (food)",
"Banh Chung (food)",
"Banh Cuon (food)",
"Banh Gio (food)",
"Banh Hoi (food)",
"Banh It La Gai (food)",
"Banh La (food)",
"Banh Mi (food)",
"Banh Mi Hot Ga Op La (food)",
"Banh Tet (food)",
"Banh Tom (food)",
"Banh Trang (food)",
"Banh Uot (food)",
"Banh Xeo (food)",
"Bank",
"Bank teller",
"Barbados",
"Barbershop music",
"Barbie",
"Barbie Girl",
"Barista",
"Barium",
"Baseball",
"Basketball",
"Bathing",
"Bathroom singing",
"Batman",
"Batman & Robin (film)",
"Batman (film)",
"Batman Forever (film)",
"Batman Returns (film)",
"Batman v Superman Dawn of Justice (film)",
"Baton Rouge, Louisiana",
"Battlestar Galactica (2004 TV series)",
"Be Thui (food)",
"Beach",
"Beadwork",
"Beagle",
"Beard",
"Beastie Boys",
"Beauty pageant",
"Beauty salon",
"Beer",
"Beetroot",
"Begins (film)",
"Beijing",
"Belief",
"Ben Thuy",
"Ben Tre",
"Benchmark Electronics Inc",
"Bengal cat",
"Bengali",
"Benjamin Franklin",
"Bentley",
"Berkelium",
"Bernard Arnault",
"Beryllium",
"Best Buy",
"Betta",
"Bhagavad Gita",
"Bible",
"Bic Camera Inc",
"Bicycle",
"Bien Hoa",
"Big Bang",
"Big Brother (franchise)",
"Big Mac",
"Bill Gates",
"Biochemistry",
"Biodiversity",
"Biology",
"Biotechnology",
"Bipolar disorder",
"Bird",
"Bird vocalization",
"Birds of Prey (film)",
"Birdwatching",
"Birth control",
"Birth order",
"Bisexuality",
"Bismuth",
"Bitcoin",
"Black Death",
"Black Jack (gum)",
"Black Panther (film)",
"Black Rock Desert",
"Black hole",
"BlackBerry OS",
"Blackjack",
"Bloating",
"Blond",
"Blood",
"Blue",
"Blue Ridge Parkway",
"Bo Bia (food)",
"Bo Kho (food)",
"Bo Luc Lac (food)",
"Bo Ne (food)",
"Bo Nhung Dam (food)",
"Bo Nuong La Lot (food)",
"Board game",
"Bob Ross",
"Body piercing",
"Bohrium",
"Boiled egg",
"Bon Iver",
"Bone fracture",
"Book",
"Book discussion club",
"Border Collie",
"Boredom",
"Boron",
"Boston Celtics",
"Boston Terrier",
"Botany",
"Bow and arrow",
"Bowie knife",
"Boxer (dog)",
"Boy Scouts",
"Brahmic scripts",
"Brain",
"Brazil",
"Bread",
"Breakfast",
"Brembo",
"Brewery",
"Bridge",
"Brie",
"British Empire",
"Britney Spears",
"Broadcasting",
"Broadridge Financial Solutions Inc",
"Broadway theatre",
"Broccoli",
"Bromine",
"Bronze",
"Bronze Age",
"Bruce Lee",
"Brunch",
"Bruno Mars",
"Buddhism",
"Budweiser",
"Buffalo Bills",
"Bugs",
"Buick",
"Bulldog",
"Bun Bo Hue (food)",
"Bun Cha Hanoi (food)",
"Bun Mam (food)",
"Bun Rieu (food)",
"Buon Me Thuot",
"Burrito",
"Business",
"Butcher",
"Butterfly",
"Butterfly stroke",
"Byzantine Empire",
"CECONOMY AG",
"Ca Kho To (food)",
"Ca Ri Ga (food)",
"Cabana boy",
"Cabernet Sauvignon",
"Cadmium",
"Caesium",
"Caffeine",
"Cairo",
"Cake",
"Cake decorating",
"Calcium",
"Calculus",
"Calendar",
"California Love",
"Californium",
"Calligraphy",
"Cam Ranh",
"Camera",
"Camping",
"Can Tho",
"Canada",
"Canal",
"Cancer",
"Candle",
"Candy",
"Canh Chua Ca (food)",
"Cannabis (drug)",
"Cape Hatteras",
"Capgemini SE",
"Capitalism",
"Cappuccino",
"Captain America Civil War (film)",
"Captain America The First Avenger (film)",
"Captain America The Winter Soldier (film)",
"Car",
"Carbon",
"Carbon dioxide",
"Card game",
"Cardiovascular disease",
"Carl Icahn",
"Carl Linnaeus",
"Carlos Slim Helu",
"Carnivore",
"Carrie Underwood",
"Carrot",
"Cartography",
"Cartoon",
"Cartoon Network",
"Cartoonist",
"Casino",
"Caspian Sea",
"Cat",
"Cat people and dog people",
"Catalysis",
"Catherine the Great",
"Catholic Church",
"Catholic school",
"Cattle",
"Catwoman (film)",
"Cereal",
"Cerium",
"Certified Public Accountant",
"Cha Ca Thang Long (food)",
"Cha Gio (food)",
"Cha Lua (food)",
"Chalk",
"Changan",
"Chanh Muoi (food)",
"Channing Tatum",
"Chao Tom (food)",
"Charity shop",
"Charlemagne",
"Charles Darwin",
"Charles Dickens",
"Charles Koch",
"Charles Tran Van Lam",
"Charlie Chaplin",
"Che Bap (food)",
"Che Sam Bo Luong (food)",
"Che Troi Nuoc (food)",
"Cheese",
"Cheeseburger",
"Cheesecake",
"Cheetos",
"Chef",
"Chemical bond",
"Chemical compound",
"Chemical element",
"Chemical reaction",
"Chemistry",
"Chevrolet",
"Chevrolet Corvette",
"Chevrolet Impala",
"Chevrolet Silverado",
"Chevrolet Tahoe",
"Chicago Loop",
"Chicago metropolitan area",
"Chicago-style pizza",
"Chicken",
"Chicken nugget",
"Child",
"Childhood sweetheart",
"China",
"Chinese characters",
"Chlorine",
"Cho Lon",
"Chocolate",
"Chocolate brownie",
"Chocolate cake",
"Chocolate chip cookie",
"Choir",
"Choreography",
"Christianity",
"Christmas",
"Christopher Columbus",
"Chrome OS",
"Chromium",
"Chromium OS",
"Chrysler",
"Church music",
"Cie Automotive Sa",
"Ciena Corp",
"Cinematography",
"Circle",
"Circuit court",
"Circulatory system",
"Circus",
"Citrix Systems Inc",
"Citroen",
"City",
"City council",
"Civil engineering",
"Civilization",
"Clarinet",
"Classical mechanics",
"Classical music",
"Clearwater Beach",
"Cleat (shoe)",
"Cleveland Cavaliers",
"Climate",
"Climate change",
"Clock",
"Clothing",
"Cloud",
"Clubbing (subculture)",
"Coal",
"Cobalt",
"Coca-Cola",
"Coco Chanel",
"Coffee",
"Cognizant Technology Solutions Corp",
"Cold War",
"Colin Huang",
"Collie",
"Colombia",
"Colonialism",
"Color",
"Color blindness",
"Colorado",
"Columbia Pictures",
"Combinatorics",
"Comedy",
"Comet",
"Comic book",
"Comics",
"CommScope Holding",
"Common cold",
"Communication",
"Communism",
"Community",
"Community theatre",
"Commuting",
"Compass",
"Complex number",
"Composer",
"Compulsive hoarding",
"Computer",
"Computer engineering",
"Computer programming",
"Computer repair technician",
"Computer science",
"Computing and information technology",
"Con Son",
"Concert",
"Concrete",
"Confidence",
"Confucianism",
"Confucius",
"Conic section",
"Conor McGregor",
"Consciousness",
"Conservatism",
"Constantine (film)",
"Constitution",
"Construction",
"Contact lens",
"Contemporary philosophy",
"Contemporary slavery",
"Continent",
"Convertible",
"Cookie",
"Cookie jar",
"Cooking",
"Coors Brewing Company",
"Copernicium",
"Copper",
"Cord-cutting",
"Corn dog",
"Corporation",
"Cosmetics",
"Costume",
"Country",
"Country music",
"Courtship",
"Creed (band)",
"Crime",
"Crochet",
"Cross country running",
"Cross-country skiing (sport)",
"Crossword",
"Cruise ship",
"Crunch Fitness",
"Crusades",
"Cryptography",
"Crystal healing",
"Cue sports",
"Culture",
"Culture of Chicago",
"Cupcake",
"Curium",
"Cut of beef",
"Cyanogen OS",
"Cyrillic script",
"Cyrus the Great",
"DC Comics",
"DNA",
"DXC Technology Co.",
"Da Lat",
"Da Nang",
"Dacia",
"Daft Punk",
"Daihatsu",
"Dallas",
"Dam",
"Dan Gilbert",
"Dance",
"Dance improvisation",
"Dancing with the Stars",
"Dang Thi Ngoc Thinh",
"Dante Alighieri",
"Darmstadtium",
"Dassault Systemes SA",
"Dating",
"David & Simon Reuben",
"David Copperfield",
"David Mach",
"David Thomson, 3rd Baron Thomson of Fleet",
"Day",
"Death",
"Death metal",
"Debt",
"Debutante",
"Deep South",
"Deity",
"Del Taco",
"Delhi",
"Delphi Automotive",
"Democracy",
"Democratic Party (United States)",
"Democratic Republic of the Congo",
"Denmark",
"Denso",
"Dental braces",
"Dental hygienist",
"Dentist",
"Dentistry",
"Denver Art Museum",
"Denver Broncos",
"Depression (mood)",
"Desert",
"Design",
"Detroit",
"Detroit Red Wings",
"Detroit Tigers",
"Developmental disorder",
"Diabetes",
"Diabetes mellitus",
"Dictatorship",
"Diet (nutrition)",
"Dieter Schwarz",
"Dieting",
"Dietrich Mateschitz",
"Digestion",
"Digital art",
"Dimension",
"Ding Lei",
"Dinh Phe De",
"Dinh Tien Hoang",
"Dinosaur",
"Diplomacy",
"Dirty Harry",
"Disability",
"Disc jockey",
"Disco",
"Discrimination",
"Disease",
"Dismissal of James Comey",
"Distance education",
"Divorce",
"Dixons Carphone Plc",
"Dmitri Mendeleev",
"Do Muoi",
"Do Quang Giai",
"Dobermann",
"Doctor Strange (film)",
"Dodge",
"Dog",
"Dog daycare",
"Dolphin",
"Domestication",
"Donald Bren",
"Donald Trump",
"Dongfeng",
"Donna Karan",
"Dr Pepper",
"Dragon",
"Drake (musician)",
"Drawing",
"Dream",
"DreamWorks",
"Drink",
"Drinking culture",
"Drinking water",
"Drug",
"Drum kit",
"Dublin",
"Dubnium",
"Ducati Motor Holding S.p.A.",
"Dune (novel)",
"Duong Van Minh",
"Duramax V8 engine",
"Dust",
"Dysprosium",
"E-book",
"Ear",
"Early human migrations",
"Early modern period",
"Earth",
"Earth science",
"Earthquake",
"Eastern Orthodox Church",
"Eclipse",
"Ecology",
"Economics",
"Economy",
"Ecosystem",
"Ecstatic dance",
"Ed Sheeran",
"Edom Technology",
"Education",
"Educational technology",
"Educational trail",
"Eggplant",
"Egypt",
"Egyptian pyramids",
"Einsteinium",
"Elaine Marshall",
"Electric battery",
"Electric light",
"Electric motor",
"Electric violin",
"Electrician",
"Electricity",
"Electromagnetic radiation",
"Electromagnetism",
"Electron",
"Electronic Arts Inc",
"Electronic dance music",
"Electronic music",
"Electronics",
"Elementary school",
"Elementary school (United States)",
"Elizabeth I",
"Elon Musk",
"Emily Dickinson",
"Emmy Noether",
"Emotion",
"Emotional detachment",
"Empire (2015 TV series)",
"Employment",
"Ender's Game",
"Energy",
"Engine",
"Engineering",
"English literature",
"Entertainment",
"Entomology",
"Environmental engineering",
"Environmentalism",
"Epilepsy",
"Epileptic seizure",
"Epistemology",
"Equation",
"Equestrianism",
"Erbium",
"Ergonomic keyboard",
"Eric Schmidt",
"Eric Yuan",
"Erosion",
"Ethics",
"Ethiopia",
"Ethnic group",
"Euclid",
"Eukaryote",
"EuroBasket",
"Europe",
"European Union",
"Europium",
"Eva Gutowski",
"Everyday life",
"Evolution",
"Ex (relationship)",
"Exercise",
"Exploitation of labour",
"Explorers (film)",
"Explosive",
"Exponentiation",
"Extinction",
"Extra (acting)",
"Extraterrestrial life",
"Extraversion and introversion",
"Extreme Couponing",
"Eye",
"Eye contact",
"FC Barcelona",
"FIH Mobile",
"Face Off (TV series)",
"Factory",
"Fair",
"Fairy tale",
"Fall Out Boy",
"Family",
"Family Guy",
"Family farm",
"Famine",
"Fantasy football (American)",
"Farmer",
"Farmers' market",
"Fascism",
"Fashion",
"Fashion design",
"Faurecia",
"Fear",
"Fear of the dark",
"Federal judiciary of the United States",
"Feminism",
"Fender Musical Instruments Corporation",
"Ferdinand Magellan",
"Fermium",
"Ferrari",
"Ferret",
"Fertility factor (demography)",
"Fertilizer",
"Festival",
"Fiat",
"Fibromyalgia",
"Fiction",
"Fiction writing",
"Fidelity National Information Services Inc",
"Field hockey",
"Film",
"Filmmaking",
"Finance",
"Fire",
"Fire OS",
"Firearm",
"Firefox OS",
"Fiscal conservatism",
"Fiserv Inc",
"Fish",
"Fish trap",
"Fisherman",
"Fishing",
"Fishing tackle",
"Fishing vessel",
"Flash (Barry Allen)",
"Flerovium",
"Flood",
"Florida",
"Flower",
"Fluorine",
"Fly fishing",
"Fnac Darty SA",
"Folk music",
"Folklore",
"Food",
"Food allergy",
"Food and health",
"Food preservation",
"Food truck",
"Football",
"Force",
"Ford F-Series",
"Ford Motor Company",
"Ford Mustang",
"Ford Mustang (first generation)",
"Foreclosure",
"Forensic Files",
"Forest",
"Forgetting",
"Forrest Mars Jr",
"Fortification",
"Fossil fuel",
"Foster care",
"Foton",
"Foxconn Industrial Internet",
"Fraction",
"France",
"Francium",
"Francois Pinault",
"Francoise Bettencourt-Meyers",
"Frank Ocean",
"Frank Sinatra",
"Franz Kafka",
"Freckle",
"Frederic Chopin",
"Free will",
"French Bulldog",
"French Revolution",
"French cuisine",
"French fries",
"French kiss",
"Frick Collection",
"Frida Kahlo",
"Fried chicken",
"Friedrich Nietzsche",
"Friends (series)",
"Friendship",
"Fruit",
"Fruit picking",
"Fruitarianism",
"Fungus",
"Furniture",
"Fyodor Dostoevsky",
"GOME Electrical Appliances",
"Gadget",
"Gadolinium",
"Galaxy",
"Galileo Galilei",
"Gallium",
"Gambling",
"Game",
"Game of Thrones",
"Game show",
"GameStop Corp",
"Ganges",
"Gap year",
"Garden",
"Gardening",
"Garfield",
"Gary Numan",
"Gasoline",
"Gastroenteritis",
"Gautama Buddha",
"Geely",
"Gemalto",
"Gemini (astrology)",
"Gender",
"Gene",
"General Electric",
"Genetic engineering",
"Genetically modified organism",
"Genetics",
"Genghis Khan",
"Genius",
"Gennady Timchenko",
"Genocide",
"Geocaching",
"Geographical regions",
"Geography",
"Geology",
"Geometry",
"Geometry and topology",
"George Foreman Grill",
"George Washington",
"Georgia (U.S. state)",
"Gerard Wertheimer",
"German Shepherd",
"German language",
"Germanium",
"Germany",
"Ghost",
"Ghost hunting",
"Gia Long",
"Giant panda",
"Gibson Les Paul",
"Gina Rinehart",
"Giovanni Ferrero",
"Glacier",
"Glass",
"Glasses",
"Global Payments Inc",
"Globalization",
"Go-kart",
"Goalkeeper (association football)",
"God",
"Gold",
"Golden Gate Bridge",
"Golden Retriever",
"Golf",
"Golf Channel",
"Gone with the Wind (film)",
"Good Burger",
"Good and evil",
"Goodfellas",
"Goodwill Industries",
"Google LLC",
"Google Pixel",
"Gossip",
"Government",
"Graduate school",
"Grammar",
"Grand Canyon",
"Grand Rapids, Michigan",
"Grand Slam (tennis)",
"Grand Theft Auto (video game)",
"Granny Smith",
"Graphic design",
"Grasshopper",
"Grassland",
"Grateful Dead",
"Gravity",
"Great Barrier Reef",
"Great Basin Desert",
"Great Depression",
"Great Lakes",
"Great Pyramid of Giza",
"Great Wall",
"Great Wall of China",
"Great white shark",
"Greece",
"Greek alphabet",
"Green Eggs and Ham",
"Green Lantern (film)",
"Grey's Anatomy",
"Grilling",
"Grocery store",
"Grunge",
"Guardians of the Galaxy (film)",
"Guardians of the Galaxy Vol. 2 (film)",
"Guitar",
"Gummi candy",
"Gunpowder",
"Gupta Empire",
"Gym",
"HBO",
"HCL Technologies",
"HIV AIDS",
"Ha Long",
"Hachette (publisher)",
"Hai Duong",
"Haiphong",
"Hair",
"Hair loss",
"Halloween",
"Halloween costume",
"Halo (series)",
"Halo 3",
"Hamburger",
"Hamilton (musical)",
"Hammurabi",
"Han dynasty",
"Hanoi",
"Happiness",
"Haribo",
"Harley-Davidson",
"Hassanal Bolkiah",
"Hassium",
"Hatshepsut",
"Hau Ly Nam De",
"Hawaii",
"He Xiangjian",
"Headphones",
"Health",
"Healthy diet",
"Hearse",
"Heart",
"Heat",
"Heavy metal music",
"Height",
"Helianthus",
"Helium",
"Henry Ford",
"Heredity",
"Hero",
"Herodotus",
"High school football",
"Higher education in the United States",
"Hiking",
"Himalayas",
"Hindu",
"Hinduism",
"Hino",
"Hip hop music",
"Hippocrates",
"Hippopotamus",
"Historical fiction",
"History",
"History of Africa",
"History of Asia",
"History of Earth",
"History of East Asia",
"History of Europe",
"History of India",
"History of Japan",
"History of North America",
"History of Oceania",
"History of South America",
"History of Vietnam",
"History of agriculture",
"History of architecture",
"History of art",
"History of film",
"History of libraries",
"History of literature",
"History of mathematics",
"History of medicine",
"History of music",
"History of paper",
"History of science",
"History of tattooing",
"History of technology",
"History of the Middle East",
"History of vegetarianism",
"Hitchhiking",
"Ho Chi Minh",
"Ho Chi Minh City",
"Hoarding",
"Hockey",
"Hokusai",
"Holden",
"Hollywood",
"Holmium",
"Holy Roman Empire",
"Home",
"Homebrewing",
"Homer",
"Homeschooling",
"Homosexuality",
"Honda",
"Honda Civic",
"Hong Kong",
"Hop Along",
"Horror fiction",
"Horror film",
"Horse",
"Horse racing",
"Horse training",
"Hospital",
"Hostage",
"House dust mite",
"Houseboat",
"Housewife",
"Hue",
"Human",
"Human behavior",
"Human body",
"Human cannibalism",
"Human height",
"Human history",
"Human migration",
"Human rights",
"Human sexuality",
"Humane society",
"Hummus",
"Humour",
"Hung Anh Vuong",
"Hung Diep Vuong",
"Hung Dinh Vuong",
"Hung Huy Vuong",
"Hung Hy Vuong",
"Hung Quoc Vuong",
"Hung Tao Vuong",
"Hung Trieu Vuong",
"Hung Trinh Vuong",
"Hung Uy Vuong",
"Hung Vi Vuong",
"Hung Viet Vuong",
"Hunting",
"Husky",
"Hybrid vehicle",
"Hydrogen",
"Hydropower",
"Hygiene",
"Hypochondriasis",
"Hyundai Motor Company",
"IBIDEN",
"IBM",
"IPhone",
"Ibn Battuta",
"Ibn Khaldun",
"Ice cream",
"Ice hockey",
"Iced tea",
"Iceland",
"Ideology",
"Iguana",
"Imagine Dragons",
"Immanuel Kant",
"Immigration to the United States",
"Immortality",
"Immune system",
"Imperialism",
"Inca Empire",
"Independent music",
"India",
"Indian Ocean",
"Indian cuisine",
"Indie rock",
"Indigenous peoples",
"Indium",
"Indo-European languages",
"Indonesia",
"Indus Valley Civilisation",
"Industrial Revolution",
"Infant",
"Infection",
"Infiniti",
"Infinity",
"Influencer marketing",
"Influenza",
"Infrastructure",
"Injury",
"Inline skates",
"Inline skating",
"Inner critic",
"Inorganic chemistry",
"Insect",
"Insurance",
"Insurance broker",
"Integer",
"Integrated circuit",
"Intel 80386",
"Intel Corporation",
"Intelligence",
"Interior design",
"Internal combustion engine",
"International Monetary Fund",
"International Red Cross and Red Crescent Movement",
"International System of Units",
"Internet",
"Internet Relay Chat",
"Internet access",
"Intuit Inc",
"Invention",
"Iodine",
"Iran",
"Iridium",
"Iron",
"Iron Age",
"Iron Maiden",
"Iron Man (film)",
"Iron Man 3 (film)",
"Iron supplement",
"Isaac Newton",
"Isaiah Rashad",
"Islam",
"Islamic Golden Age",
"Island",
"Israel",
"Istanbul",
"Isuzu",
"It (2017 film)",
"Italian Americans",
"Italian cuisine",
"Italy",
"Iveco",
"Ivy League",
"J. K. Rowling",
"JAC Motors",
"JB Hi-Fi",
"Jabil Inc",
"Jabir ibn Hayyan",
"Jack Ma",
"Jacqueline Mars",
"Jaguar",
"Jaguar Cars",
"Jainism",
"Jakarta",
"Jamaica",
"James Clerk Maxwell",
"James Cook",
"James Dyson",
"James Joyce",
"James Ratcliffe",
"James Simons",
"Jane Austen",
"Janitor",
"Japan",
"Jason Mraz",
"Jazz",
"Jeep",
"Jeff Bezos",
"Jeopardy!",
"Jerusalem",
"Jess Greenberg",
"Jesus",
"Jetengine",
"Jewellery",
"Jim Carrey",
"Jim Walton",
"Jimi Hendrix",
"Jimmy Fallon",
"Joan of Arc",
"Johann Sebastian Bach",
"Johann Wolfgang von Goethe",
"Johannes Gutenberg",
"John Lennon",
"John Locke",
"John Mars",
"John Menard",
"John Muir Trail",
"Johnny Cash",
"Joke",
"Jonah Hex (film)",
"Jorge Paulo Lemann",
"Joseph Safra",
"Joseph Stalin",
"Journalism",
"Journalist",
"Judaism",
"Juggling",
"Juicing",
"Julia Koch",
"Julius Caesar",
"Jupiter",
"Justice",
"Justin Bieber",
"Justin Timberlake",
"K-pop",
"KaiOS",
"Kale",
"Karaoke",
"Karate",
"Karl Marx",
"Katie Perry",
"Katy Perry",
"Kayak",
"Kayaking",
"Ken Griffin",
"Kentucky",
"Kenworth",
"Kenya",
"Kesha",
"Ketogenic diet",
"Kia",
"Kick scooter",
"Kid Rock",
"Kien Phuc",
"Kindergarten",
"Kings of Leon",
"Kinh Duong Vuong",
"Kinship",
"Kiss",
"Kitten",
"Knife",
"Knitting",
"Knowledge",
"Kobe beef",
"Koi",
"Kojima",
"Komodo dragon",
"Kon Tum",
"Konami Holdings Corp",
"Korn",
"Krav Maga",
"Krypton",
"Kubuntu",
"Kurt Godel",
"Kyoto",
"LG International Corp",
"LGBT parenting",
"Labrador Retriever",
"Lac Long Quan",
"Lactose",
"Lactose intolerance",
"Lada",
"Lady Gaga",
"Lager",
"Lagos",
"Lake",
"Lake Victoria",
"Lamborghini",
"Land",
"Land Rover",
"Language",
"Lanthanum",
"Lao Cai",
"Laozi",
"Larry Ellison",
"Larry Page",
"Las Vegas",
"Lasagne",
"Laser",
"Late modern period",
"Latin",
"Latin script",
"Laurene Powell Jobs",
"Law",
"Law firm",
"Law school",
"Law school in the United States",
"Lawn game",
"Lawrencium",
"Lawyer",
"Laziness",
"Le Dai Hanh",
"Le Duan",
"Le Duc Anh",
"Le Hong Phong",
"Le Kha Phieu",
"Le Long Dinh",
"Lead",
"League (film)",
"League of Legends",
"Learning",
"Leather",
"Lee Kun-Hee",
"Lee Shau Kee",
"Len Blavatnik",
"Lens",
"Leo Tolstoy",
"Leonard Lauder",
"Leonardo Del Vecchio",
"Leonardo DiCaprio",
"Leonardo da Vinci",
"Leonhard Euler",
"Leonid Mikhelson",
"Leprechaun",
"Lesbian",
"Lexus",
"Li Bai",
"Li Ka-shing",
"Liberalism",
"Liberty",
"Library",
"Life",
"Lifeguard",
"Light",
"Lightning McQueen",
"Lilium",
"Lincoln Motor Company",
"Lindsey Stirling",
"Linear algebra",
"Linebacker",
"Linguine",
"Linguistics",
"Linkin Park",
"Linux",
"List of Walt Disney Pictures films",
"List of chicken dishes",
"List of orphans and foundlings",
"List of tourist attractions in Paris",
"Literacy",
"Literature",
"Lithium",
"Live action role-playing game",
"Liver",
"Livermorium",
"Lizard",
"Logarithm",
"Logic",
"Lollipop",
"London",
"London Marathon",
"Long Xuyen",
"Long hair",
"Long-distance running",
"Louis Armstrong",
"Louis Pasteur",
"Louis Vuitton",
"Louvre",
"Love",
"Lubuntu",
"Lucy Maud Montgomery",
"Ludwig van Beethoven",
"LuneOS",
"Lung",
"Lutetium",
"Luxury yacht",
"Ly Anh Tong",
"Ly Cao Tong",
"Ly Hue Tong",
"Ly Nam De",
"Ly Nhan Tong",
"Ly Thai Tong",
"Ly Than Tong",
"Ly Thanh Tong",
"M.video PJSC",
"Ma Huateng",
"MacKenzie Scott",
"Macaroni and cheese",
"Madonna (entertainer)",
"Madrid",
"Magazine",
"Magic Mike",
"Magic The Gathering",
"Magna",
"Magnesium",
"Magneti Marelli",
"Magnetism",
"Maha Vajiralongkorn",
"Mahatma Gandhi",
"Mahayana",
"Mahindra",
"Mail",
"Maine Coon",
"Maize",
"Major League Baseball",
"Make-up artist",
"Malaria",
"Mammal",
"Man",
"Man of Steel (film)",
"Management",
"Manganese",
"Manicure",
"Mansion",
"Manufacturing",
"Mao Zedong",
"Map",
"Marathon",
"Marching band",
"Marco Polo",
"Marduk (band)",
"Marie Curie",
"Marine aquarium",
"Mark Twain",
"Mark Zuckerberg",
"Marketing",
"Marlboro (cigarette)",
"Marriage",
"Mars",
"Martial arts",
"Martin Luther",
"Maruti Suzuki",
"Marvel Comics",
"Marvel's The Avengers (film)",
"Mary Wollstonecraft",
"Masayoshi Son",
"Maserati",
"Mashed potato",
"Masonry",
"Mass",
"Mass media",
"Master of Business Administration",
"Materials",
"Mathematical analysis",
"Mathematical proof",
"Mathematician",
"Mathematics",
"Matter",
"Maya civilization",
"Mazda",
"McDonald's",
"McLaren",
"Measurement",
"Meat",
"Meatloaf",
"Mecca",
"Mechanic",
"Mechanical engineering",
"Media and communication",
"Medical imaging",
"Medical school",
"Medication",
"Medicine",
"Meditation",
"Mediterranean Sea",
"Mediterranean cuisine",
"MeeGo",
"Meitnerium",
"Memory",
"Mendelevium",
"Mental disorder",
"Mental health",
"Mercedes-Benz",
"Mercedes-Benz S-Class",
"Mermaid",
"Mesoamerica",
"Mesopotamia",
"Metabolism",
"Metal",
"Metal Gear Solid",
"Metallica",
"Metallurgy",
"Metaphysics",
"Metropolitan Museum of Art",
"Mexico",
"Mexico City",
"Miami",
"Michael Bloomberg",
"Michael Dell",
"Michael Faraday",
"Michael Jackson",
"Michael Phelps",
"Michelangelo",
"Mick Jagger",
"Micro Focus International Plc",
"Micropterus",
"Microscope",
"Microsoft",
"Microsoft Corp",
"Middle Ages",
"Middle East",
"Middlesex (novel)",
"Migraine",
"Miguel de Cervantes",
"Mike Trout",
"Mile run",
"Mileena",
"Miley Cyrus",
"Military",
"Military history",
"Milk",
"Milk allergy",
"Milkshake",
"Milky Way",
"Mind",
"Minecraft",
"Mineral",
"Minh Mang",
"Minimum wage",
"Mining",
"Minnesota Timberwolves",
"Miranda Lambert",
"Miss USA",
"Mississippi River",
"Mitsubishi",
"Mitsubishi Corp",
"Mobile phone",
"Modernism",
"Molecular biology",
"Molecule",
"Molly Ringwald",
"Molybdenum",
"Momentum",
"Monarchy",
"Money",
"Mongol Empire",
"Monkey",
"Moon",
"Moped",
"Morning sickness",
"Moscovium",
"Moscow",
"Moses",
"Motion",
"Motion Industries",
"Motorcycle",
"Motorcycle club",
"Motorola Solutions Inc",
"Mount Kilimanjaro",
"Mountain",
"Mountain Dew",
"Mountain bike",
"Mountaineering",
"Moustache",
"Muffin",
"Mughal Empire",
"Muhammad",
"Muhammad ibn Musa al-Khwarizmi",
"Mukesh Ambani",
"Multilingualism",
"Mumbai",
"Murasaki Shikibu",
"Muscle",
"Muse (band)",
"Museum",
"Museum of Modern Art",
"Mushroom",
"Music",
"Musical genre",
"Musical instrument",
"My Little Pony",
"My Little Pony Friendship Is Magic fandom",
"My Tho",
"Myanmar",
"Mystery film",
"Myth",
"NASA",
"NASCAR",
"NATO",
"NCR Corp",
"NEXON",
"Nachos",
"Nail art",
"Nam Dinh",
"Napoleon",
"Narcissism",
"Narcissus (plant)",
"Nashville, Tennessee",
"National Basketball Association",
"National Football League",
"National Guard of the United States",
"National Hockey League",
"Nationalism",
"Natural gas",
"Natural number",
"Natural phenomenon",
"Natural rubber",
"Natural satellite",
"Natural selection",
"Nature",
"Navigation",
"Navigation and timekeeping",
"Navy",
"Near-death experience",
"Near-sightedness",
"Neil deGrasse Tyson",
"Nelson Mandela",
"Neodymium",
"Neolithic Revolution",
"Neon",
"Neptune",
"Neptunium",
"Nervous system",
"Netflix",
"Neutron",
"Nevada",
"New Age",
"New England",
"New Hampshire",
"New Mexico",
"New York City",
"New York University",
"New York-style pizza",
"New religious movement",
"News",
"Newspaper",
"Newton's laws of motion",
"Ng Man-tat",
"Ngo Dinh Diem",
"Ngo Quyen",
"Nguyen Huu Tho",
"Nguyen Khanh",
"Nguyen Minh Triet",
"Nguyen Phu Trong",
"Nguyen Van Linh",
"Nguyen Van Thieu",
"Nha Trang",
"Nicholas Sparks",
"Nickel",
"Nicolaus Copernicus",
"Niels Bohr",
"Nigeria",
"Night owl",
"Nightclub",
"Nihonium",
"Nike, Inc.",
"Nikola Tesla",
"Nile",
"Nineteen Eighty-Four",
"Nintendo",
"Niobium",
"Nirvana (band)",
"Nissan",
"Nitrogen",
"Nobelium",
"Nojima Corp",
"Nokia Oyj",
"Nomura Research Institute",
"Nong Duc Manh",
"North America",
"North Dakota",
"NortonLifeLock Inc",
"Novel",
"Nuclear power",
"Nuclearweapon",
"Number",
"Number theory",
"Nursing",
"Nursing home care",
"Nut (fruit)",
"Nutrition",
"Oaksville, New York",
"Obesity",
"Obesity in the United States",
"Obsessive-compulsive disorder (OCD)",
"Ocean",
"Oceania",
"Oganesson",
"Ohio",
"Old age",
"Olympic Games",
"Olympic weightlifting",
"Omar (name)",
"One Direction",
"Onion",
"Online game",
"Only child",
"Ontology",
"Opah",
"Opel",
"Open relationship",
"Opera",
"Optical",
"Optics",
"Oral tradition",
"Orange juice",
"Orbit",
"Orc",
"Orchestra",
"Organic chemistry",
"Organic food",
"Organism",
"Oriental Shorthair",
"Origins of rock and roll",
"Orphan",
"Orphanage",
"Orthodontics",
"Osamu Tezuka",
"Osmium",
"Osteopathic medicine in the United States",
"Ottoman Empire",
"Outer space",
"Overview of gun laws by nation",
"Overwatch (video game)",
"Overweight",
"Ovo vegetarianism",
"Owner-occupancy",
"Oxygen",
"Ozark Trail (hiking trail)",
"Pablo Picasso",
"Pacific Crest Trail",
"Pacific Ocean",
"Paddleboarding",
"Page boy (wedding attendant)",
"Painting",
"Pakistan",
"Paleontology",
"Palladium",
"Pallonji Mistry",
"Palm OS",
"Palmistry",
"Palo Alto Networks Inc",
"Pancake",
"Paper",
"Parachuting",
"Parenting",
"Paris",
"Parisian cafe",
"Parrot",
"Parsons School of Design",
"Parti",
"Partnership",
"Party",
"Party City",
"Pattern hair loss",
"Paul the Apostle",
"PayPal Holdings Inc",
"Peace",
"Peanut",
"Peanut allergy",
"Pearl Jam",
"Pecan pie",
"Pediatrics",
"Peet's Coffee",
"People (magazine)",
"People for the Ethical Treatment of Animals",
"People watching",
"Pepsi",
"Perfectionism (psychology)",
"Performing arts",
"Periodic table",
"Personal name",
"Personality",
"Pet",
"Pet adoption",
"Petroleum",
"Peugeot",
"Peyton Manning",
"Phan Boi Chau",
"Phan Chau Trinh",
"Phan Khac Suu",
"Phan Thiet",
"Pharmacist",
"Phil Knight",
"Philip Larkin",
"Philippines",
"Philosophy",
"Philosophy of science",
"Phoenicia",
"Phonograph record",
"Phosphorus",
"Photography",
"Photon",
"Photosynthesis",
"Physical chemistry",
"Physical cosmology",
"Physical disability",
"Physics",
"Pi",
"Piaggi",
"Piano",
"Piccadilly Circus",
"Pickled cucumber",
"Pickling",
"Picnic",
"Pie",
"Pierre-Simon Laplace",
"Pig farming",
"Pink",
"Pipe smoking",
"Pit bull",
"Pita",
"Pittsburgh",
"Pittsburgh Steelers",
"Pizza",
"Pizza delivery",
"Planet",
"Plant",
"Plantation",
"Plastic",
"Platetectonics",
"Platinum",
"Plato",
"Play (activity)",
"Pleiku",
"Plutonium",
"Pneumonia",
"Poaching",
"Podcast",
"Poetry",
"Poland",
"Polaris Inc",
"Police",
"Police officer",
"Political party",
"Political science",
"Politician",
"Politics",
"Pollution",
"Polonium",
"Polyamory",
"Polydactyly",
"Polygon",
"Polyhedron",
"Poodle",
"Pop music",
"Popular culture",
"Pork",
"Porsche",
"Portland, Maine",
"Post-classical history",
"Potassium",
"Potato",
"Pottery",
"Poverty",
"Power (social and political)",
"Praseodymium",
"Prayer",
"Pre-Columbian era",
"Preacher",
"Pregnancy",
"Prehistoric art",
"Prehistory",
"Pretty Woman",
"Pride and Prejudice",
"Primate",
"Prime number",
"Prince (musician)",
"Prince Alwaleed Bin Talal Alsaud",
"Printing",
"Privacy",
"Probability",
"Professor",
"Promethium",
"Property",
"Prosus",
"Protactinium",
"Protein",
"Protestantism",
"Proton",
"Psychiatrist",
"Psychologist",
"Psychology",
"Pub",
"Public affairs industry",
"Public housing",
"Publishing",
"Pudding",
"Puerto Rico",
"Pug",
"Punk rock",
"PureOS",
"Purple",
"Puzzle",
"Qin Shi Huang",
"Qin Yinglin",
"Quake (video game)",
"Quang Ngai",
"Quantummechanics",
"Qui Nhon",
"Quilting",
"Quran",
"RNA",
"Rabindranath Tagore",
"Rach Gia",
"Racism",
"Racquetball",
"Radar",
"Radio",
"Radioactive decay",
"Radiohead",
"Radiology",
"Radium",
"Radon",
"Rafael Nadal",
"Ragini (actress)",
"Rail transport",
"Rain",
"Rainbow",
"Ramesses II",
"Rancid (band)",
"Rapping",
"Ravioli",
"React (JavaScript library)",
"Reading (process)",
"Real estate",
"Real estate broker",
"Real number",
"Real property",
"Reality television",
"Reason",
"Record producer",
"Recreational fishing",
"Recycling",
"Red Hot Chili Peppers",
"Red hair",
"Red wine",
"Reddit",
"Redox",
"Reformation",
"Refrigeration",
"Registered nurse",
"Regret",
"Religion",
"Religious music",
"Rembrandt",
"Renaissance",
"Renaissance fair",
"Renault Samsung",
"Rene Descartes",
"Renewable energy",
"Reproduction",
"Reptile",
"Retail",
"Retirement",
"Rhenium",
"Rhodium",
"Rice",
"Richard Wagner",
"Rick and Morty",
"Rise Against",
"Risk (game)",
"Rita Hayworth",
"Ritual",
"River",
"Road",
"Roald Amundsen",
"Robert De Niro",
"Robotics",
"Rock climbing",
"Rock music",
"Rock opera",
"Rocket",
"Rocky Mountains",
"Rodent",
"Roentgenium",
"Role-playing",
"Role-playing game",
"Roller coaster",
"Rolls-Royce Motor Cars",
"Romaine lettuce",
"Roman Abramovich",
"Romance (love)",
"Romanticism",
"Rome",
"Romeo and Juliet",
"Roofer",
"Rose",
"Rotisserie",
"Rubidium",
"Rugby football",
"Rum and Coke",
"Running",
"Rupert Murdoch",
"Rural area",
"Rush (band)",
"Russia",
"Ruthenium",
"Rutherfordium",
"S. Robson Walton",
"SAP SE",
"SK Holdings",
"Sa Dec",
"Sabre Corp",
"Sahara",
"Saliva",
"Salman bin Abdulaziz Al Saud",
"Salsa (dance)",
"Salt",
"Samarium",
"Samsung Electronics",
"Samsung Galaxy",
"San Antonio Spurs",
"Sanitation",
"Sanmina Corporation",
"Santa Fe, New Mexico",
"Santorini",
"Sao Paulo",
"Sappho",
"Sargon of Akkad",
"Satellite",
"Satoshi Nakamoto",
"Saturn",
"Saudi Arabia",
"Saxophone",
"Scandium",
"Scania",
"School",
"Science",
"Science fiction",
"Science, technology, engineering, and mathematics",
"Scientific Revolution",
"Scientific method",
"Scooby-Doo",
"Scooter (motorcycle)",
"Scotch whisky",
"Scramble for Africa",
"Scripps National Spelling Bee",
"Scuba diving",
"Sculpture",
"Sea",
"Seaborgium",
"Seafood",
"Sears",
"Season",
"Seat",
"Seattle",
"Secondary education",
"Secularism",
"Security guard",
"Seed",
"Selenium",
"Self-confidence",
"Self-consciousness",
"Semiconductor device",
"Sense",
"Sephora",
"Serge Dassault",
"Sergey Brin",
"ServiceNow Inc",
"Sewing",
"Sewing machine",
"Sex",
"Sex change",
"Sexism",
"Sexual orientation",
"Sexually transmitted infection",
"Shamanism",
"Shark",
"Shark attack",
"Shaun White",
"Shazam! (film)",
"Sheikh Khalifa Bin Zayed Al Nahyan",
"Sheikh Mansour bin Zayed Al Nahyan",
"Shellfish",
"Shen Kuo",
"Sherlock Holmes",
"Shia Islam",
"Shift work",
"Shinto",
"Ship",
"Shopping",
"Shopping addiction",
"Short story",
"Shortstop",
"Show tune",
"Shrimp",
"Shrimp and prawn as food",
"Siamese cat",
"Sibling",
"Sigmund Freud",
"Sikhism",
"Silicon",
"Silicon Valley (TV series)",
"Silk Road",
"Silver",
"Simon Bolivar",
"Simple machine",
"Singapore",
"Singing",
"Sino-Tibetan languages",
"Sir Evelyn De Rothschild",
"Sixteen Candles",
"Skateboarding",
"Skeleton",
"Skin",
"Skin care",
"Skoda",
"Skunk",
"Slacker",
"Slasher film",
"Slavery",
"Sleep",
"Sleeve tattoo",
"Small business",
"Smallpox",
"Smartphone",
"Smoking",
"Snake",
"Snapple",
"Snare drum",
"Sneakers",
"Snorkeling",
"Snow",
"Snowboarding",
"Social anxiety",
"Social anxiety disorder",
"Social class",
"Social equality",
"Social science",
"Socialism",
"Society",
"Sociology",
"Socrates",
"Sodium",
"Soft drink",
"Softball",
"Software engineering",
"Soil",
"Solar System",
"Solar eclipse",
"Solar energy",
"Solitude",
"Somniloquy",
"Sophocles",
"Sopra Steria Group SA",
"Soul",
"Sound",
"Soup kitchen",
"South Africa",
"South America",
"South Korea",
"South Park",
"Southern Baptist Convention",
"Southwest Airlines",
"Soviet Union",
"Soybean",
"Space",
"Space Center Houston",
"Space exploration",
"Space station",
"Spaceflight",
"Spaghetti alla puttanesca",
"Spaghetti with meatballs",
"Spain",
"Spanish Empire",
"Spanish language",
"Special education",
"Species",
"Speech",
"Speed of light",
"Spice",
"Spider",
"Spider-Man",
"Spider-Man Far From Home (film)",
"Spider-Man Homecoming (film)",
"Spirituality",
"Spitz",
"SpongeBob SquarePants",
"Sport",
"Sport of athletics",
"Sport utility vehicle",
"Sports car",
"Springfield, Missouri",
"Sprite (drink)",
"Squad (film)",
"Square Inc",
"Stamp Day for Superman (film)",
"Stamp collecting",
"Stand-up comedy",
"Standard Model",
"Stanford University",
"Star",
"Star Trek",
"Star Wars",
"StarCraft",
"Starbucks",
"State of matter",
"Statistics",
"Steak",
"Steam (software)",
"Steam engine",
"Steel",
"Stefan Persson",
"Stepfamily",
"Stepfather",
"Stephen Chow",
"Stephen Hawking",
"Stephen King",
"Stephen Schwarzman",
"Steve Ballmer",
"Stone Age",
"Stove",
"Strawberry",
"Street dance",
"String instrument",
"Stroke",
"Strong interaction",
"Strontium",
"Structures",
"Studio Ghibli",
"Subaru",
"Sugar",
"Suicide",
"Suleiman the Magnificent",
"Sulfur",
"Sumer",
"Summer camp",
"Sun",
"Sunday school",
"Sunni Islam",
"Sunrise",
"Sunset",
"Supergirl (film)",
"Superman (film)",
"Superman II (film)",
"Superman III (film)",
"Superman IV The Quest for Peace (film)",
"Superman Returns (film)",
"Superman and the Mole Men (film)",
"Supernova",
"Surfing",
"Surgeon",
"Surgery",
"Susanne Klatten",
"Sushi",
"Suzuki",
"Swamp Thing (film)",
"Sweden",
"Swimming",
"Swing (dance)",
"Symbian",
"Synopsys Inc",
"Syracuse",
"Syracuse, New York",
"System of a Down",
"TED (conference)",
"TTM Technologies Inc",
"TVS Motor Company",
"Taco",
"Tadashi Yanai",
"Tailgate party",
"Take-Two Interactive Software Inc",
"Take-out",
"Talent show",
"Talmud",
"Tang dynasty",
"Tantalum",
"Tanzania",
"Taoism",
"Tap dance",
"Tardiness",
"Taste",
"Tata Consultancy Services",
"Tata Motors",
"Tattoo",
"Tax",
"Taxicab",
"Tay Ninh",
"Taylor Swift",
"Tea",
"Teacher",
"Teapot",
"Tech Mahindra",
"Technetium",
"Technical drawing",
"Techno",
"Technology",
"Teenage pregnancy",
"Telecommunication",
"Telenovela",
"Telephone",
"Telescope",
"Television",
"Tellurium",
"Temperature",
"Tennessine",
"Tennis",
"Terbium",
"Terrestrial locomotion",
"Terrorism",
"Tesla Inc",
"Tex-Mex",
"Text messaging",
"Textile",
"Thai Binh",
"Thai Nguyen",
"Thailand",
"Thallium",
"Thanh Hoa",
"The Avett Brothers",
"The Batman (film)",
"The Beatles",
"The Chainsmokers",
"The Cheesecake Factory",
"The Chronicles of Thomas Covenant",
"The Dark Knight (film)",
"The Dark Knight Rises (film)",
"The Flintstones",
"The Hershey Company",
"The Hitchhiker's Guide to the Galaxy",
"The Humane Society of the United States",
"The Improv",
"The Incredible Hulk (film)",
"The Joe Rogan Experience",
"The Last of the Mohicans (1992 film)",
"The Little Mermaid (1989 film)",
"The Lord of the Rings",
"The New York Times",
"The New Yorker",
"The Pretenders",
"The Return of Swamp Thing (film)",
"The Rolling Stones",
"The Royal Ballet",
"The Sage Group Plc",
"The Simpsons",
"The Story So Far (band)",
"The Strokes",
"The Suicide Squad (film)",
"The Tale of Genji",
"The Technomancer",
"The Tonight Show Starring Jimmy Fallon",
"The Voice (U.S. TV series)",
"The arts",
"Theatre",
"Theocracy",
"Theory of relativity",
"Theravada",
"Thermodynamics",
"Thigh-high boots",
"Thomas Aquinas",
"Thomas Edison",
"Thomas Peterffy",
"Thor (film)",
"Thor Ragnarok (film)",
"Thorium",
"Thought",
"Thu Dau Mot",
"Thulium",
"Thursday",
"Tiger",
"Time",
"Time (magazine)",
"Tin",
"Tiny house movement",
"Titanic (1997 film)",
"Titanium",
"Tofas",
"Tofu",
"Toga party",
"Tokyo",
"Tom and Jerry",
"Tomato",
"Ton Duc Thang",
"Tool",
"Tool (band)",
"Tools and machinery",
"Top Chef",
"Topology",
"Tornado",
"Toronto Raptors",
"Toto (Oz)",
"Toto (band)",
"Tour de France",
"Tourism",
"Tourism in Italy",
"Tourism in Rome",
"Toy",
"Toyota",
"Toyota Industries",
"Toyota Prius",
"Track and field",
"Trade",
"Trade union",
"Traffic collision",
"Tran Anh Tong",
"Tran Dai Quang",
"Tran Du Tong",
"Tran Duc Luong",
"Tran Due Tong",
"Tran Hien Tong",
"Tran Nghe Tong",
"Tran Nhan Tong",
"Tran Phe De",
"Tran Thai Tong",
"Tran Thanh Tong",
"Tran Thieu De",
"Tran Thuan Tong",
"Tran Van Huong",
"Trance music",
"TransUnion",
"Translation",
"Transport",
"Transportation",
"Travel",
"Tree",
"Triangle",
"Trieu Viet Vuong",
"Trigonometry",
"Trophy",
"Tropical cyclone",
"Truck",
"Truck driver",
"True crime",
"Trumpet",
"Truong Chinh",
"Truong Tan Sang",
"Truth",
"Tu Duc",
"Tuberculosis",
"Tuesday",
"Tungsten",
"Tupac Shakur",
"Turkey",
"Tutor",
"Tuy Hoa",
"Twelfth grade",
"Twilight (novel series)",
"Two Steps from Hell",
"U2",
"UD Trucks",
"Uber Technologies Inc",
"Ubuntu",
"Ubuntu Budgie",
"Ubuntu Kylin",
"Ubuntu Server",
"Ubuntu Touch OS",
"Ultimate (sport)",
"Ultimate Fighting Championship",
"Ultra Music Festival",
"Underwater diving",
"Unemployment",
"Unicorn",
"Unicycle",
"Unieuro SpA",
"Unimicron Technology Corp",
"Union College",
"United Kingdom",
"United Nations",
"United Parcel Service",
"United States",
"United States Armed Forces",
"Universe",
"University",
"University of Alabama",
"University of Chicago",
"Unix",
"Upholstery",
"Uranium",
"Uranus",
"Urban agriculture",
"Us Weekly",
"VMware Inc",
"Vaccine",
"Vacuum",
"Vagit Alekperov",
"Valedictorian",
"Vampire",
"Van Halen",
"Vanadium",
"Vancouver Grizzlies",
"Vanilla",
"Vasco da Gama",
"Vauxhall",
"Vedas",
"Veganism",
"Vegetable",
"Vegetarianism",
"Venus",
"Vermont",
"Veterinary medicine",
"Veterinary physician",
"Victorian era",
"Video",
"Video game",
"Video game design",
"Vietnam",
"Vietnamese Pot-bellied",
"Vietnamese cuisine",
"Viking Age",
"Vikings",
"Vincent van Gogh",
"Vinh",
"Vinh Long",
"Violin",
"Violin technique",
"Virgil",
"Virginia",
"Virus",
"Visual acuity",
"Visual arts",
"Visual impairment",
"Vitamin C",
"Vladimir Lenin",
"Vladimir Lisin",
"Vladimir Potanin",
"Vladimir Putin",
"Vo Chi Cong",
"Volcano",
"Volkswagen",
"Volkswagen Passat",
"Voltaire",
"Volume",
"Volunteering",
"Volvo",
"Vung Tau",
"WPG Holdings",
"WWE",
"Wage slavery",
"Waiting staff",
"Wall Street",
"Walmart",
"Walt Disney",
"Walt Disney World",
"Wang Wei",
"War",
"Warren Buffett",
"Washington Nationals",
"Washington Wizards",
"Watchmen (film)",
"Water",
"Water skiing",
"Watercolor painting",
"Wave",
"Weak interaction",
"Wealth",
"Weapon",
"Weapons",
"Weather",
"Wedding cake",
"Ween",
"Weight loss",
"Weight training",
"Welder",
"Welfare",
"Welsh Corgi",
"Western imperialism in Asia",
"Western music (North America)",
"Western philosophy",
"Wheat",
"Wheel",
"Wheelchair",
"Whisky",
"Whittling",
"Who Wants to Be a Millionaire?",
"Whole Foods Market",
"Whole food",
"Widow",
"Wilderness",
"William Shakespeare",
"Wind",
"Wind power",
"Windows 10",
"Windows 7",
"Windows 8",
"Windows ARM",
"Windows Mobile",
"Windows Phone",
"Windows Vista",
"Windows XP",
"Wisconsin",
"Wolfgang Amadeus Mozart",
"Woman",
"Women's suffrage",
"Wonder Woman (film)",
"Wonder Woman 1984 (film)",
"Wood",
"Woodstock",
"Word",
"Work-life balance",
"Workday Inc",
"Workplace relationships",
"World Health Organization",
"World Trade Organization",
"World War I",
"World War II",
"World Wide Web",
"Writer",
"Writing",
"Wuling",
"Xbox",
"Xenon",
"Xiamen CD Inc.",
"Xiamen King Long",
"Xiaomi Corp",
"Xu Jiayin",
"Xubuntu",
"Yachting",
"Yang Huiyan",
"Yangtze",
"Year",
"Yellow",
"Yellowstone National Park",
"Yo Gotti",
"Yoga",
"Yoga as exercise",
"Yokai",
"Yorkshire Terrier",
"YouTube",
"Young Frankenstein",
"Ytterbium",
"Yttrium",
"Yutong",
"Yves Saint Laurent (brand)",
"Zebra",
"Zhang Yiming",
"Zhang Zhidong",
"Zheng He",
"Zhong Huijuan",
"Zhong Shanshan",
"Zinc",
"Zirconium",
"Zoology",
"Zumba",
"e",
"nth root",
"salesforce.com inc",
"webOS"
] | ---
language:
- vi
tags:
- vietnamese
- topicifier
- multilingual
- tiny
license:
- mit
pipeline_tag: text-classification
widget:
- text: "Đam mê của tôi là nhiếp ảnh"
---
# distilbert-base-multilingual-cased-vietnamese-topicifier
## About
Fine-tuned from `distilbert-base-multilingual-cased` on a tiny dataset of Vietnamese topics.
## Usage
Enter a message to see which topic the model predicts is being discussed. For example:
```
# Photography
Đam mê của tôi là nhiếp ảnh
# World War I
Bạn đã từng nghe về cuộc đại thế chiến ?
```
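The card itself gives no code snippet; below is a minimal sketch of how such a query could look with the `transformers` pipeline. The `predict_topic` helper, the placeholder namespace in the model id, and the lazy import are assumptions for illustration, not from this card:

```python
def predict_topic(text: str, model_id: str = "<namespace>/distilbert-base-multilingual-cased-vietnamese-topicifier"):
    """Return the top predicted topic label for a Vietnamese message (sketch)."""
    # Lazy import keeps this sketch importable even if transformers is absent;
    # replace <namespace> with the actual hub namespace of this model.
    from transformers import pipeline
    classifier = pipeline("text-classification", model=model_id)
    return classifier(text)[0]["label"]
```

Calling `predict_topic("Đam mê của tôi là nhiếp ảnh")` would then be expected to return a topic label such as those in the label list above.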
## Other
The model was fine-tuned on a tiny dataset; don't use it in a production product.
lannelin/bert-imdb-1hidden | [
"neg",
"pos"
] | ---
language:
- en
datasets:
- imdb
metrics:
- accuracy
---
# bert-imdb-1hidden
## Model description
A `bert-base-uncased` model was restricted to 1 hidden layer and
fine-tuned for sequence classification on the
imdb dataset loaded using the `datasets` library.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
pretrained = "lannelin/bert-imdb-1hidden"
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AutoModelForSequenceClassification.from_pretrained(pretrained)
LABELS = ["negative", "positive"]
def get_sentiment(text: str):
inputs = tokenizer.encode_plus(text, return_tensors='pt')
output = model(**inputs)[0].squeeze()
return LABELS[(output.argmax())]
print(get_sentiment("What a terrible film!"))
```
#### Limitations and bias
No special consideration given to limitations and bias.
Any bias held by the imdb dataset may be reflected in the model's output.
## Training data
Initialised with [bert-base-uncased](https://huggingface.co/bert-base-uncased)
Fine tuned on [imdb](https://huggingface.co/datasets/imdb)
## Training procedure
The model was fine-tuned for 1 epoch with a batch size of 64,
a learning rate of 5e-5, and a maximum sequence length of 512.
## Eval results
Accuracy on imdb test set: 0.87132 | 1,355 |
m3hrdadfi/albert-fa-base-v2-sentiment-digikala | [
"no_idea",
"not_recommended",
"recommended"
] | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify texts, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, the latter in both binary and multi-class forms.
### Digikala
Digikala user comments provided by [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels:
| Label | # |
|:---------------:|:------:|
| no_idea | 10394 |
| not_recommended | 15885 |
| recommended | 36042 |
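As a quick sanity check, the class balance implied by the table can be computed with plain Python (the counts are taken directly from the table above):

```python
# Label counts from the Digikala table above
counts = {"no_idea": 10394, "not_recommended": 15885, "recommended": 36042}
total = sum(counts.values())
print(total)  # 62321, matching the comment count quoted in the card
for label, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {n / total:.1%}")
# recommended: 57.8%, not_recommended: 25.5%, no_idea: 16.7%
```

The roughly 58/25/17 split shows the dataset is imbalanced toward the `recommended` class, which is worth keeping in mind when interpreting the F1 scores below.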
**Download**
You can download the dataset from [here](https://www.digikala.com/opendata/)
## Results
The following table summarizes the F1 score obtained as compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.12 | 81.74 | 80.74 | - |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. | 2,583 |
projecte-aina/roberta-base-ca-cased-tc | [
"Medi ambient",
"Societat",
"Policial",
"Judicial",
"Empresa",
"Partits",
"Política",
"Successos",
"Salut",
"Infraestructures",
"Parlament",
"Música",
"Govern",
"Unió Europea",
"Economia",
"Mobilitat",
"Treball",
"Cultura",
"Educació"
] | ---
language:
- ca
tags:
- "catalan"
- "text classification"
- "tecla"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/tecla"
metrics:
- accuracy
model-index:
- name: roberta-base-ca-cased-tc
results:
- task:
type: text-classification
dataset:
name: tecla
type: projecte-aina/tecla
metrics:
- name: Accuracy
type: accuracy
value: 0.740388810634613
widget:
- text: "Els Pets presenten el seu nou treball al Palau Sant Jordi."
- text: "Els barcelonins incrementen un 23% l’ús del cotxe des de l’inici de la pandèmia."
- text: "Retards a quatre línies de Rodalies per una avaria entre Sants i plaça de Catalunya."
- text: "Majors de 60 anys i sanitaris començaran a rebre la tercera dosi de la vacuna covid els propers dies."
- text: "Els cinemes Verdi estrenen Verdi Classics, un nou canal de televisió."
---
# Catalan BERTa (RoBERTa-base) fine-tuned for Text Classification.
The **roberta-base-ca-cased-tc** is a Text Classification (TC) model for the Catalan language fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details).
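A minimal usage sketch with the `transformers` pipeline (the `classify_ca` helper name and the lazy import are illustrative assumptions; the model id is the one this card describes):

```python
def classify_ca(text: str, model_id: str = "projecte-aina/roberta-base-ca-cased-tc"):
    """Return the predicted news category for a Catalan sentence (sketch)."""
    # Lazy import so the sketch has no hard dependency at definition time.
    from transformers import pipeline
    classifier = pipeline("text-classification", model=model_id, tokenizer=model_id)
    return classifier(text)[0]["label"]

# e.g. classify_ca("Els Pets presenten el seu nou treball al Palau Sant Jordi.")
# should yield one of the 19 category labels listed for this model.
```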
## Datasets
We used the TC dataset in Catalan called [TeCla](https://huggingface.co/datasets/projecte-aina/tecla) for training and evaluation.
## Evaluation and results
We evaluated the _roberta-base-ca-cased-tc_ on the TeCla test set against standard multilingual and monolingual baselines:
| Model | TeCla (accuracy) |
| ------------|:-------------|
| roberta-base-ca-cased-tc | **74.04** |
| mBERT | 70.56 |
| XLM-RoBERTa | 71.68 |
| WikiBERT-ca | 73.22 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Citing
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
``` | 2,903 |
Aureliano/distilbert-base-uncased-if | [
"answer.v.01",
"ask.v.01",
"ask.v.02",
"blow.v.01",
"brandish.v.01",
"break.v.05",
"burn.v.01",
"buy.v.01",
"charge.v.17",
"choose.v.01",
"clean.v.01",
"climb.v.01",
"close.v.01",
"connect.v.01",
"consult.v.02",
"cut.v.01",
"dig.v.01",
"drink.v.01",
"drive.v.01",
"drop.v.01",
"eat.v.01",
"enter.v.01",
"examine.v.02",
"exit.v.01",
"fill.v.01",
"follow.v.01",
"give.v.03",
"hit.v.02",
"hit.v.03",
"insert.v.01",
"insert.v.02",
"inventory.v.01",
"jump.v.01",
"kill.v.01",
"lie_down.v.01",
"light_up.v.05",
"listen.v.01",
"look.v.01",
"lower.v.01",
"memorize.v.01",
"move.v.02",
"note.v.04",
"open.v.01",
"play.v.03",
"pour.v.01",
"pray.v.01",
"press.v.01",
"pull.v.04",
"push.v.01",
"put.v.01",
"raise.v.02",
"read.v.01",
"remove.v.01",
"repeat.v.01",
"rub.v.01",
"say.v.08",
"search.v.04",
"sequence.n.02",
"set.v.05",
"shake.v.01",
"shoot.v.01",
"show.v.01",
"sit_down.v.01",
"skid.v.04",
"sleep.v.01",
"smash.v.02",
"smell.v.01",
"stand.v.03",
"switch_off.v.01",
"switch_on.v.01",
"take.v.04",
"take_off.v.06",
"talk.v.02",
"tell.v.03",
"throw.v.01",
"touch.v.01",
"travel.v.01",
"turn.v.09",
"unknown",
"unlock.v.01",
"wait.v.01",
"wake_up.v.02",
"wear.v.02",
"write.v.07"
] | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (uncased) for Interactive Fiction
[`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) finetuned on a dataset of Interactive
Fiction commands.
Details on the datasets can be found [here](https://github.com/aporporato/jericho-corpora).
The resulting model scored an accuracy of 0.976253 on the WordNet task test set.
## How to use the discriminator in `transformers`
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer
discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenizer = AutoTokenizer.from_pretrained("Aureliano/distilbert-base-uncased-if")
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = discriminator.config.id2label[tf.math.argmax(prediction).numpy()]
print(text, ":", label) # take.v.04 -> "get into one's hands, take physically"
```
## How to use the discriminator in `transformers` on a custom dataset
(Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb)
```python
import math
import numpy as np
import tensorflow as tf
from datasets import load_metric, Dataset, DatasetDict
from transformers import TFAutoModel, TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer
from transformers.keras_callbacks import KerasMetricCallback
# This example shows how this model can be used:
# you should fine-tune the model on your specific corpus of commands, which should be bigger than this toy example
dict_train = {
"idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18",
"19", "20"],
"sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book",
"inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich",
"drop sandwich", "x sandwich", "agin"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04",
"drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02",
"inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"]
}
dict_val = {
"idx": ["0", "1", "2", "3", "4", "5"],
"sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"]
}
raw_train_dataset = Dataset.from_dict(dict_train)
raw_val_dataset = Dataset.from_dict(dict_val)
raw_dataset = DatasetDict()
raw_dataset["train"] = raw_train_dataset
raw_dataset["val"] = raw_val_dataset
raw_dataset = raw_dataset.class_encode_column("label")
print(raw_dataset)
print(raw_dataset["train"].features)
print(raw_dataset["val"].features)
print(raw_dataset["train"][1])
label2id = {}
id2label = {}
for i, l in enumerate(raw_dataset["train"].features["label"].names):
label2id[l] = i
id2label[i] = l
discriminator = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased",
label2id=label2id,
id2label=id2label)
discriminator.distilbert = TFAutoModel.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenizer = AutoTokenizer.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True)
pre_tokenizer_columns = set(raw_dataset["train"].features)
encoded_dataset = raw_dataset.map(tokenize_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
batch_size = len(encoded_dataset["train"])
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
tf_validation_dataset = encoded_dataset["val"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_epochs = 20
batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size)
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=2e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps
)
metric = load_metric("accuracy")
def compute_metrics(eval_predictions):
logits, labels = eval_predictions
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset)
callbacks = [metric_callback]
discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"])
discriminator.fit(
tf_train_dataset,
epochs=num_epochs,
validation_data=tf_validation_dataset,
callbacks=callbacks
)
print("Evaluate on test data")
results = discriminator.evaluate(tf_validation_dataset)
print("test loss, test acc:", results)
text = "i"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset
text = "w"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset
```
## How to use in a Rasa pipeline
The model can be integrated in a Rasa pipeline through
a [`LanguageModelFeaturizer`](https://rasa.com/docs/rasa/components#languagemodelfeaturizer)
```yaml
recipe: default.v1
language: en
pipeline:
# See https://rasa.com/docs/rasa/tuning-your-model for more information.
...
- name: "WhitespaceTokenizer"
...
- name: LanguageModelFeaturizer
model_name: "distilbert"
model_weights: "Aureliano/distilbert-base-uncased-if"
...
``` | 7,493 |
jkhan447/sarcasm-detection-RoBerta-base | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-RoBerta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-RoBerta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8207
- Accuracy: 0.7273
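The auto-generated card omits both a usage example and the label mapping. A hedged inference sketch, assuming the fine-tuned head exposes the default generic class ids (`LABEL_0`/`LABEL_1`); `format_prediction` is an illustrative helper, not released code:

```python
from transformers import pipeline

def detect_sarcasm(texts, model_id="jkhan447/sarcasm-detection-RoBerta-base"):
    # Downloads the checkpoint on first call (network access required).
    # The card does not document which id means "sarcastic", so expect
    # generic LABEL_0 / LABEL_1 outputs until that mapping is confirmed.
    detector = pipeline("text-classification", model=model_id)
    return detector(texts)

def format_prediction(pred):
    # Human-readable summary of one pipeline result.
    return f"{pred['label']} ({pred['score']:.2f})"

# for pred in detect_sarcasm(["Oh great, another Monday."]):
#     print(format_prediction(pred))
```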
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,150 |
ericntay/bert-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bert-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.937
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-emotion
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.937
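The checkpoint exposes only generic class ids (`LABEL_0` … `LABEL_5`). The `emotion` dataset orders its classes sadness, joy, love, anger, fear, surprise; assuming this fine-tune kept that order (the card does not confirm it), the ids can be mapped back to readable names:

```python
# Class order of the `emotion` dataset.  Whether this checkpoint preserved
# it is an assumption -- check the dataset's features before relying on it.
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def readable_label(generic_label):
    # e.g. 'LABEL_3' -> 'anger'
    return EMOTIONS[int(generic_label.split("_")[1])]

print(readable_label("LABEL_1"))  # joy
```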
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.553 | 1.0 | 1600 | 0.2631 | 0.9255 |
| 0.161 | 2.0 | 3200 | 0.1582 | 0.937 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,634 |
Alireza1044/mobilebert_sst2 | [
"negative",
"positive"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9036697247706422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1730
- Accuracy: 0.9037
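As with the other auto-generated cards, no inference snippet is provided. A sketch that runs the checkpoint directly in PyTorch and reads the `negative`/`positive` labels from its config (`argmax_label` is an illustrative helper, not released code):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def argmax_label(scores, labels):
    # Map the index of the largest score onto the label list.
    return labels[max(range(len(scores)), key=scores.__getitem__)]

def predict_sentiment(text, model_id="Alireza1044/mobilebert_sst2"):
    # Downloads the checkpoint on first call (network access required).
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    with torch.no_grad():
        logits = model(**tokenizer(text, return_tensors="pt")).logits[0]
    labels = [model.config.id2label[i] for i in range(len(logits))]
    return argmax_label(logits.tolist(), labels)

# predict_sentiment("A charming and often affecting journey.")
# -> 'negative' or 'positive'
```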
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,395 |