modelId: string (length 6 to 107)
label: list
readme: string (length 0 to 56.2k)
readme_len: int64 (0 to 56.2k)
rohanrajpal/bert-base-codemixed-uncased-sentiment
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: - hi - en tags: - hi - en - codemix datasets: - SAIL 2017 --- # Model name ## Model description I took the bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the SAIL 2017 dataset. ## Intended uses & limitations #### How to use ```python # You can include sample code which will be formatted #Coming soon! ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data I trained this [pretrained model](https://huggingface.co/bert-base-multilingual-cased) on the SAIL 2017 dataset ([link](http://amitavadas.com/SAIL/Data/SAIL_2017.zip)). ## Training procedure No preprocessing. ## Eval results ### BibTeX entry and citation info ```bibtex @inproceedings{khanuja-etal-2020-gluecos, title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}", author = "Khanuja, Simran and Dandapat, Sandipan and Srinivasan, Anirudh and Sitaram, Sunayana and Choudhury, Monojit", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.329", pages = "3575--3585" } ```
1,326
Jeevesh8/std_0pnt2_bert_ft_cola-53
null
Entry not found
15
mujeensung/roberta-base_mnli_bc
[ "contradiction", "entailment", "neutral" ]
--- language: - en license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: roberta-base_mnli_bc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.9583768461882739 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base_mnli_bc This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.2125 - Accuracy: 0.9584 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2015 | 1.0 | 16363 | 0.1820 | 0.9470 | | 0.1463 | 2.0 | 32726 | 0.1909 | 0.9559 | | 0.0768 | 3.0 | 49089 | 0.2117 | 0.9585 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.1+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
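The usage sections of this card are placeholders; below is a minimal inference sketch, assuming the standard transformers sequence-classification API. The premise/hypothesis pair is illustrative, and label names are read from the model config rather than hard-coded.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mujeensung/roberta-base_mnli_bc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score one premise/hypothesis pair and print the predicted NLI label.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```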
1,749
Jeevesh8/std_0pnt2_bert_ft_cola-54
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-55
null
Entry not found
15
razent/SciFive-base-Pubmed_PMC
null
--- language: - en tags: - token-classification - text-classification - question-answering - text2text-generation - text-generation datasets: - pubmed - pmc/open_access --- # SciFive Pubmed+PMC Base ## Introduction Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598) Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_ ## How to use For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive). ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-base-Pubmed_PMC") model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-base-Pubmed_PMC").to("cuda") sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ." text = "ncbi_ner: " + sentence + " </s>" encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, early_stopping=True ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(line) ```
1,404
has-abi/bert-finetuned-resumes-sections
[ "awards", "certificates", "contact/name/title", "education", "interests", "languages", "para", "professional_experiences", "projects", "skills", "soft_skills", "summary" ]
--- license: mit tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: bert-finetuned-resumes-sections results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-resumes-sections This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0333 - F1: 0.9548 - Roc Auc: 0.9732 - Accuracy: 0.9493 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | 0.1659 | 1.0 | 601 | 0.0645 | 0.9201 | 0.9434 | 0.8910 | | 0.055 | 2.0 | 1202 | 0.0426 | 0.9407 | 0.9633 | 0.9309 | | 0.0324 | 3.0 | 1803 | 0.0371 | 0.9450 | 0.9663 | 0.9368 | | 0.0226 | 4.0 | 2404 | 0.0389 | 0.9402 | 0.9651 | 0.9343 | | 0.0125 | 5.0 | 3005 | 0.0354 | 0.9433 | 0.9650 | 0.9343 | | 0.0091 | 6.0 | 3606 | 0.0364 | 0.9482 | 0.9696 | 0.9434 | | 0.0075 | 7.0 | 4207 | 0.0363 | 0.9464 | 0.9676 | 0.9393 | | 0.007 | 8.0 | 4808 | 0.0333 | 0.9548 | 0.9732 | 0.9493 | | 0.0063 | 9.0 | 5409 | 0.0358 | 0.9501 | 0.9698 | 0.9434 | | 0.0043 | 10.0 | 6010 | 0.0380 | 0.9475 | 0.9707 | 0.9443 | | 0.0032 | 11.0 | 6611 | 0.0377 | 0.9491 | 0.9712 | 0.9468 | | 0.0031 | 12.0 | 7212 | 0.0375 | 0.9500 | 0.9716 | 0.9459 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
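The card's usage sections are empty; below is a hedged multi-label inference sketch. The F1/ROC-AUC metrics and the 12-section label set suggest a multi-label head, so logits are passed through a sigmoid; the 0.5 threshold and the French example line are assumptions, not taken from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "has-abi/bert-finetuned-resumes-sections"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative French resume line (the base model is a French BERT).
text = "2018-2021 : Licence Informatique, Université de Paris."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
# Keep every section label whose score clears the (assumed) 0.5 threshold.
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```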
2,374
jy46604790/Fake-News-Bert-Detect
null
--- license: apache-2.0 --- # Fake News Recognition ## Overview This model was trained on over 40,000 news articles from different media outlets, based on 'roberta-base'. It returns a result when you enter news text of fewer than 500 words (the excess is truncated automatically). LABEL_0: Fake news LABEL_1: Real news ## Quick Tutorial ### Download The Model ```python from transformers import pipeline MODEL = "jy46604790/Fake-News-Bert-Detect" clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL) ``` ### Feed Data ```python text = "Indonesian police have recaptured a U.S. citizen who escaped a week ago from an overcrowded prison on the holiday island of Bali, the jail's second breakout of foreign inmates this year. Cristian Beasley from California was rearrested on Sunday, Badung Police chief Yudith Satria Hananta said, without providing further details. Beasley was a suspect in crimes related to narcotics but had not been sentenced when he escaped from Kerobokan prison in Bali last week. The 32-year-old is believed to have cut through bars in the ceiling of his cell before scaling a perimeter wall of the prison in an area being refurbished. The Kerobokan prison, about 10 km (six miles) from the main tourist beaches in the Kuta area, often holds foreigners facing drug-related charges. Representatives of Beasley could not immediately be reached for comment. In June, an Australian, a Bulgarian, an Indian and a Malaysian tunneled to freedom about 12 meters (13 yards) under Kerobokan prison's walls. The Indian and the Bulgarian were caught soon after in neighboring East Timor, but Australian Shaun Edward Davidson and Malaysian Tee Kok King remain at large. Davidson has taunted authorities by saying he was enjoying life in various parts of the world, in purported posts on Facebook. Kerobokan has housed a number of well-known foreign drug convicts, including Australian Schappelle Corby, whose 12-1/2-year sentence for marijuana smuggling got huge media attention." ``` ### Result ```python result = clf(text) result ``` Output: [{'label': 'LABEL_1', 'score': 0.9994995594024658}]
2,136
Jeevesh8/std_0pnt2_bert_ft_cola-56
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-57
null
Entry not found
15
sismetanin/rubert-toxic-pikabu-2ch
null
--- language: - ru tags: - toxic comments classification --- ## RuBERT-Toxic RuBERT-Toxic is a [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned on [Kaggle Russian Language Toxic Comments Dataset](https://www.kaggle.com/blackmoon/russian-language-toxic-comments). You can find a detailed description of the data used and the fine-tuning process in [this article](http://doi.org/10.28995/2075-7182-2020-19-1149-1159). You can also find this information at [GitHub](https://github.com/sismetanin/toxic-comments-detection-in-russian). | System | P | R | F<sub>1</sub> | | ------------- | ------------- | ------------- | ------------- | | MNB-Toxic | 87.01% | 81.22% | 83.21% | | M-BERT<sub>Base</sub>-Toxic | 91.19% | 91.10% | 91.15% | | <b>RuBERT-Toxic</b> | <b>91.91%</b> | <b>92.51%</b> | <b>92.20%</b> | | M-USE<sub>CNN</sub>-Toxic | 89.69% | 90.14% | 89.91% | | M-USE<sub>Trans</sub>-Toxic | 90.85% | 91.92% | 91.35% | We fine-tuned two versions of Multilingual Universal Sentence Encoder (M-USE), Multilingual Bidirectional Encoder Representations from Transformers (M-BERT) and RuBERT for toxic comments detection in Russian. Fine-tuned RuBERT-Toxic achieved F<sub>1</sub> = 92.20%, demonstrating the best classification score. ## Toxic Comments Dataset [Kaggle Russian Language Toxic Comments Dataset](https://www.kaggle.com/blackmoon/russian-language-toxic-comments) is the collection of Russian-language annotated comments from [2ch](https://2ch.hk/) and [Pikabu](https://pikabu.ru/), which was published on Kaggle in 2019. It consists of 14412 comments, where 4826 texts were labelled as toxic, and 9586 were labelled as non-toxic. The average length of comments is ~175 characters; the minimum length is 21, and the maximum is 7403. ## Citation If you find this repository helpful, feel free to cite our publication: ``` @INPROCEEDINGS{Smetanin2020Toxic, author={Sergey Smetanin}, booktitle={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialogue 2020”}, title={Toxic Comments Detection in Russian}, year={2020}, doi={10.28995/2075-7182-2020-19-1149-1159} } ```
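The card reports benchmark scores but no usage snippet; a minimal sketch with the transformers pipeline follows. The Russian input is illustrative, and the returned label string depends on this checkpoint's config.

```python
from transformers import pipeline

# Binary toxicity classification via the high-level pipeline API.
clf = pipeline("text-classification", model="sismetanin/rubert-toxic-pikabu-2ch")
print(clf("Пример комментария для проверки на токсичность."))
```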
2,179
NDugar/ZSD-microsoft-v2xxlmnli
[ "CONTRADICTION", "NEUTRAL", "ENTAILMENT" ]
--- language: en tags: - deberta-v1 - deberta-mnli tasks: mnli thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit pipeline_tag: zero-shot-classification --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. This is the DeBERTa large model fine-tuned on the MNLI task. #### Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B | |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------| | | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S | | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- | | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- | | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- | | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 | | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7| | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9| |**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** | -------- #### Notes. - <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp** ```bash cd transformers/examples/text-classification/ export TASK_NAME=mrpc python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16 ``` ### Citation If you find DeBERTa useful for your work, please cite the following paper: ```latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
3,876
Raychanan/bert-base-chinese-FineTuned-Binary-Best
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-58
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-59
null
Entry not found
15
DaNLP/da-bert-tone-subjective-objective
[ "objective", "subjective" ]
--- language: - da tags: - bert - pytorch - subjectivity - objectivity license: cc-by-sa-4.0 datasets: - Twitter Sentiment - Europarl Sentiment widget: - text: Jeg tror alligvel, det bliver godt metrics: - f1 --- # Danish BERT Tone for the detection of subjectivity/objectivity The BERT Tone model detects whether a text (in Danish) is subjective or objective. The model is based on the finetuning of the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-tone) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-tone-subjective-objective") tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-tone-subjective-objective") ``` ## Training data The data used for training come from the [Twitter Sentiment](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent) and [EuroParl sentiment 2](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2) datasets.
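The snippet above loads the model but stops before inference; here is a self-contained sketch that scores the card's own widget example (label names come from the model config, matching this record's objective/subjective list):

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-tone-subjective-objective")
tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-tone-subjective-objective")

# Classify the widget example from the card's metadata.
inputs = tokenizer("Jeg tror alligvel, det bliver godt", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # "objective" or "subjective"
```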
1,224
barissayil/bert-sentiment-analysis-sst
null
Entry not found
15
cardiffnlp/twitter-roberta-base-emoji
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
# Twitter-roBERTa-base for Emoji prediction This is a roBERTa-base model trained on ~58M tweets and finetuned for emoji prediction with the TweetEval benchmark. - Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval). ## Example of classification ```python from transformers import AutoModelForSequenceClassification from transformers import TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import softmax import csv import urllib.request # Preprocess text (username and link placeholders) def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) # Tasks: # emoji, emotion, hate, irony, offensive, sentiment # stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary task='emoji' MODEL = f"cardiffnlp/twitter-roberta-base-{task}" tokenizer = AutoTokenizer.from_pretrained(MODEL) # download label mapping labels=[] mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt" with urllib.request.urlopen(mapping_link) as f: html = f.read().decode('utf-8').split("\n") csvreader = csv.reader(html, delimiter='\t') labels = [row[1] for row in csvreader if len(row) > 1] # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) model.save_pretrained(MODEL) text = "Looking forward to Christmas" text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) # # TF # model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) # model.save_pretrained(MODEL) # text = "Looking forward to Christmas" # text = preprocess(text) # encoded_input = tokenizer(text, return_tensors='tf') # output = model(encoded_input) # scores = output[0][0].numpy() # scores = softmax(scores) ranking = np.argsort(scores) ranking = ranking[::-1] for i in range(scores.shape[0]): l = labels[ranking[i]] s = scores[ranking[i]] print(f"{i+1}) {l} {np.round(float(s), 4)}") ``` Output: ``` 1) 🎄 0.5457 2) 😊 0.1417 3) 😁 0.0649 4) 😍 0.0395 5) ❤️ 0.03 6) 😜 0.028 7) ✨ 0.0263 8) 😉 0.0237 9) 😂 0.0177 10) 😎 0.0166 11) 😘 0.0143 12) 💕 0.014 13) 💙 0.0076 14) 💜 0.0068 15) 🔥 0.0065 16) 💯 0.004 17) 🇺🇸 0.0037 18) 📷 0.0034 19) ☀ 0.0033 20) 📸 0.0021 ```
2,625
Jeevesh8/std_0pnt2_bert_ft_cola-60
null
Entry not found
15
DaNLP/da-bert-hatespeech-detection
[ "not offensive", "offensive" ]
--- language: - da tags: - bert - pytorch - hatespeech license: cc-by-sa-4.0 datasets: - social media metrics: - f1 widget: - text: "Senile gamle idiot" --- # Danish BERT for hate speech (offensive language) detection The BERT HateSpeech model detects whether a Danish text is offensive or not. It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-hatespeech-detection") tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-hatespeech-detection") ``` ## Training data The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
1,049
Jeevesh8/std_0pnt2_bert_ft_cola-61
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-63
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-64
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-62
null
Entry not found
15
abhishek/autonlp-japanese-sentiment-59363
[ "negative", "positive" ]
--- tags: autonlp language: ja widget: - text: "🤗AutoNLPが大好きです" datasets: - abhishek/autonlp-data-japanese-sentiment --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 59363 ## Validation Metrics - Loss: 0.12651239335536957 - Accuracy: 0.9532079853817648 - Precision: 0.9729688278823665 - Recall: 0.9744633462616643 - AUC: 0.9717333684823413 - F1: 0.9737155136027014 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-japanese-sentiment-59363 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-japanese-sentiment-59363", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-japanese-sentiment-59363", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,094
ShreyaR/finetuned-roberta-depression
null
--- license: mit tags: - generated_from_trainer widget: - text: "I feel so low and numb, don't feel like doing anything. Just passing my days" - text: "Sleep is my greatest and most comforting escape whenever I wake up these days. The literal very first emotion I feel is just misery and reminding myself of all my problems." - text: "I went to a movie today. It was below my expectations but the day was fine." - text: "The first day of work was a little hectic but met pretty good colleagues, we went for a team dinner party at the end of the day." metrics: - accuracy model-index: - name: finetuned-roberta-depression results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-roberta-depression This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1385 - Accuracy: 0.9745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0238 | 1.0 | 625 | 0.1385 | 0.9745 | | 0.0333 | 2.0 | 1250 | 0.1385 | 0.9745 | | 0.0263 | 3.0 | 1875 | 0.1385 | 0.9745 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
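The card lists widget examples but no code; a minimal sketch that scores one of those widget texts with the pipeline API:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ShreyaR/finetuned-roberta-depression")
# One of the widget examples from the card's metadata.
print(clf("I went to a movie today. It was below my expectations but the day was fine."))
```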
1,961
Jeevesh8/std_0pnt2_bert_ft_cola-66
null
Entry not found
15
gchhablani/bert-base-cased-finetuned-mrpc
[ "equivalent", "not_equivalent" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer - fnet-bert-base-comparison datasets: - glue metrics: - accuracy - f1 model-index: - name: bert-base-cased-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8602941176470589 - name: F1 type: f1 value: 0.9025641025641027 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-mrpc This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.7132 - Accuracy: 0.8603 - F1: 0.9026 - Combined Score: 0.8814 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py --model_name_or_path bert-base-cased --task_name mrpc --do_train --do_eval --max_seq_length 512 --per_device_train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 5 --output_dir bert-base-cased-finetuned-mrpc --push_to_hub --hub_strategy all_checkpoints --logging_strategy epoch --save_strategy epoch --evaluation_strategy epoch ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.5981 | 1.0 | 230 | 0.4580 | 0.7892 | 0.8562 | 0.8227 | | 0.3739 | 2.0 | 460 | 0.3806 | 0.8480 | 0.8942 | 0.8711 | | 0.1991 | 3.0 | 690 | 0.4879 | 0.8529 | 0.8958 | 0.8744 | | 0.1286 | 4.0 | 920 | 0.6342 | 0.8529 | 0.8986 | 0.8758 | | 0.0812 | 5.0 | 1150 | 0.7132 | 0.8603 | 0.9026 | 0.8814 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
3,053
cross-encoder/ms-marco-TinyBERT-L-4
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-4') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-4') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-4', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
3,233
lordtt13/emo-mobilebert
[ "angry", "happy", "others", "sad" ]
--- language: en datasets: - emo --- ## Emo-MobileBERT: a thin version of BERT LARGE, trained on the EmoContext Dataset from scratch ### Details of MobileBERT The **MobileBERT** model was presented in [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by *Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou* and here is the abstract: Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERT_BASE), and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERT_BASE). ### Details of the downstream task (Emotion Recognition) - Dataset 📚 SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text In this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes: - sad 😢 - happy 😃 - angry 😡 - others ### Model training The training script is present [here](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/emo-mobilebert.ipynb). ### Pipelining the Model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("lordtt13/emo-mobilebert") model = AutoModelForSequenceClassification.from_pretrained("lordtt13/emo-mobilebert") nlp_sentence_classif = pipeline('sentiment-analysis', model = model, tokenizer = tokenizer) nlp_sentence_classif("I've never had such a bad day in my life") # Output: [{'label': 'sad', 'score': 0.93153977394104}] ``` > Created by [Tanmay Thakur](https://github.com/lordtt13) | [LinkedIn](https://www.linkedin.com/in/tanmay-thakur-6bb5a9154/)
2,933
textattack/roberta-base-RTE
null
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.7942238267148014, as measured by the eval set accuracy, found after 3 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
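No usage snippet is given; below is a hedged sketch. GLUE RTE is a sentence-pair task, so premise and hypothesis are tokenized as a pair; max_length 128 mirrors the training setup described above, and the example pair is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "textattack/roberta-base-RTE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A cat is sleeping on the mat.", "There is a cat on the mat.",
                   return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # index into the checkpoint's label map
```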
618
daveni/twitter-xlm-roberta-emotion-es
[ "anger", "disgust", "fear", "joy", "others", "sadness", "surprise" ]
--- language: - es tags: - Emotion Analysis --- **Note**: This model & model card are based on the [finetuned XLM-T for Sentiment Analysis](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) # twitter-XLM-roBERTa-base for Emotion Analysis This is an XLM-roBERTa-base model trained on ~198M tweets and fine-tuned for emotion analysis in Spanish. This model was presented at the EmoEvalEs competition, part of the [IberLEF 2021 Conference](https://sites.google.com/view/iberlef2021/), where the proposed task was the classification of Spanish tweets into seven different classes: *anger*, *disgust*, *fear*, *joy*, *sadness*, *surprise*, and *other*. We achieved the first position in the competition with a macro-averaged F1 score of 71.70%. - [Our code for EmoEvalEs submission](https://github.com/gsi-upm/emoevales-iberlef2021). - [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs) ## Example Pipeline with a [Tweet from @JaSantaolalla](https://twitter.com/JaSantaolalla/status/1398383243645177860) ```python from transformers import pipeline model_path = "daveni/twitter-xlm-roberta-emotion-es" emotion_analysis = pipeline("text-classification", framework="pt", model=model_path, tokenizer=model_path) emotion_analysis("Einstein dijo: Solo hay dos cosas infinitas, el universo y los pinches anuncios de bitcoin en Twitter. Paren ya carajo aaaaaaghhgggghhh me quiero murir") ``` ``` [{'label': 'anger', 'score': 0.48307016491889954}] ``` ## Full classification example ```python from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer, AutoConfig import numpy as np from scipy.special import softmax # Preprocess text (username and link placeholders) def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) model_path = "daveni/twitter-xlm-roberta-emotion-es" tokenizer = AutoTokenizer.from_pretrained(model_path) config = AutoConfig.from_pretrained(model_path) # PT model = AutoModelForSequenceClassification.from_pretrained(model_path) text = "Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal." text = preprocess(text) print(text) encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) # Print labels and scores ranking = np.argsort(scores) ranking = ranking[::-1] for i in range(scores.shape[0]): l = config.id2label[ranking[i]] s = scores[ranking[i]] print(f"{i+1}) {l} {np.round(float(s), 4)}") ``` Output: ``` Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal. 1) joy 0.7887 2) others 0.1679 3) surprise 0.0152 4) sadness 0.0145 5) anger 0.0077 6) disgust 0.0033 7) fear 0.0027 ``` #### Limitations and bias - The dataset we used for fine-tuning was unbalanced: almost half of the records belonged to the *other* class, so there might be bias towards this class. ## Training data Pretrained weights were left identical to the original model released by [cardiffnlp](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base). We used the [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs) for fine-tuning.
### BibTeX entry and citation info ```bibtex @inproceedings{vera2021gsi, title={GSI-UPM at IberLEF2021: Emotion Analysis of Spanish Tweets by Fine-tuning the XLM-RoBERTa Language Model}, author={Vera, D and Araque, O and Iglesias, CA}, booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2021). CEUR Workshop Proceedings, CEUR-WS, M{\'a}laga, Spain}, year={2021} } ```
3,814
manishiitg/distilbert-resume-parts-classify
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-65
null
Entry not found
15
cross-encoder/quora-roberta-base
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for Quora Duplicate Questions Detection This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely the two given questions are duplicates. Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates. ## Usage and Performance Pre-trained models can be used like this: ``` from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/quora-roberta-base') scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')]) ``` You can also use this model without sentence_transformers, using just the Transformers ``AutoModel`` class
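The plain-transformers route mentioned above is not shown; here is a hedged sketch, assuming a single-logit head (the label list shows only LABEL_0) squashed through a sigmoid to obtain the 0-1 duplicate score described in the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "cross-encoder/quora-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

features = tokenizer(["How do I learn Java?"], ["What is the best way to learn Java?"],
                     padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    score = torch.sigmoid(model(**features).logits)[0].item()
print(score)  # closer to 1 means more likely duplicates
```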
1,070
Jeevesh8/std_0pnt2_bert_ft_cola-68
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-67
null
Entry not found
15
symanto/xlm-roberta-base-snli-mnli-anli-xnli
[ "ENTAILMENT", "NEUTRAL", "CONTRADICTION" ]
--- language: - ar - bg - de - el - en - es - fr - ru - th - tr - ur - vn - zh datasets: - SNLI - MNLI - ANLI - XNLI tags: - zero-shot-classification --- A cross-attention NLI model trained for zero-shot and few-shot text classification. The base model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), trained with the code from [here](https://github.com/facebookresearch/anli); on [SNLI](https://nlp.stanford.edu/projects/snli/), [MNLI](https://cims.nyu.edu/~sbowman/multinli/), [ANLI](https://github.com/facebookresearch/anli) and [XNLI](https://github.com/facebookresearch/XNLI). Usage: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch import numpy as np model = AutoModelForSequenceClassification.from_pretrained("symanto/xlm-roberta-base-snli-mnli-anli-xnli") tokenizer = AutoTokenizer.from_pretrained("symanto/xlm-roberta-base-snli-mnli-anli-xnli") input_pairs = [ ("I like this pizza.", "The sentence is positive."), ("I like this pizza.", "The sentence is negative."), ("I mag diese Pizza.", "Der Satz ist positiv."), ("I mag diese Pizza.", "Der Satz ist negativ."), ("Me gusta esta pizza.", "Esta frase es positivo."), ("Me gusta esta pizza.", "Esta frase es negativo."), ] inputs = tokenizer(input_pairs, truncation="only_first", return_tensors="pt", padding=True) logits = model(**inputs).logits probs = torch.softmax(logits, dim=1) probs = probs[..., [0]].tolist() print("probs", probs) np.testing.assert_almost_equal(probs, [[0.83], [0.04], [1.00], [0.00], [1.00], [0.00]], decimal=2) ```
1,690
Jeevesh8/std_0pnt2_bert_ft_cola-70
null
Entry not found
15
bhadresh-savani/albert-base-v2-emotion
[ "anger", "fear", "joy", "love", "sadness", "surprise" ]
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - emotion - pytorch license: apache-2.0 datasets: - emotion metrics: - Accuracy, F1 Score --- # Albert-base-v2-emotion ## Model description: [Albert](https://arxiv.org/pdf/1909.11942v6.pdf) is A Lite BERT architecture that has significantly fewer parameters than a traditional BERT architecture. [Albert-base-v2](https://huggingface.co/albert-base-v2) finetuned on the emotion dataset using HuggingFace Trainer with below Hyperparameters ``` learning rate 2e-5, batch size 64, num_train_epochs=8, ``` ## Model Performance Comparision on Emotion Dataset from Twitter: | Model | Accuracy | F1 Score | Test Sample per Second | | --- | --- | --- | --- | | [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 | | [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 | | [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 | | [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 | ## How to Use the model: ```python from transformers import pipeline classifier = pipeline("text-classification",model='bhadresh-savani/albert-base-v2-emotion', return_all_scores=True) prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", ) print(prediction) """ Output: [[ {'label': 'sadness', 'score': 0.010403595864772797}, {'label': 'joy', 'score': 0.8902180790901184}, {'label': 'love', 'score': 0.042532723397016525}, {'label': 'anger', 'score': 0.041297927498817444}, {'label': 'fear', 'score': 0.011772023513913155}, {'label': 'surprise', 'score': 0.0037756056990474463} ]] """ ``` ## Dataset: [Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion). ## Training procedure [Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb) ## Eval results ```json { 'test_accuracy': 0.936, 'test_f1': 0.9365658988006296, 'test_loss': 0.15278364717960358, 'test_runtime': 10.9413, 'test_samples_per_second': 182.794, 'test_steps_per_second': 2.925 } ``` ## Reference: * [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/)
2,634
jason9693/SoongsilBERT-base-beep
[ "hate", "none", "offensive" ]
--- language: ko widget: - text: "응 어쩔티비~" datasets: - kor_hate --- # Finetuning ## Result ### Base Model | | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :-------------------- | :---: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | KoBERT | 351M | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 | | XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 | | HanBERT | 614M | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 | | KoELECTRA-Base-v3 | 431M | 90.63 | 88.11 | 84.45 | 82.24 | 85.53 | 95.25 | 84.83 / 93.45 | 67.61 | | Soongsil-BERT | 370M | **91.2** | - | - | - | 76 | 94 | - | **69** | ### Small Model | | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :--------------------- | :--: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | DistilKoBERT | 108M | 88.60 | 84.65 | 60.50 | 72.00 | 72.59 | 92.48 | 54.40 / 77.97 | 60.72 | | KoELECTRA-Small-v3 | 54M | 89.36 | 85.40 | 77.45 | 78.60 | 80.79 | 94.85 | 82.11 / 91.13 | 63.07 | | Soongsil-BERT | 213M | **90.7** | 84 | 69.1 | 76 | - | 92 | - | **66** | ## Reference - [Transformers Examples](https://github.com/huggingface/transformers/blob/master/examples/README.md) - [NSMC](https://github.com/e9t/nsmc) - [Naver NER Dataset](https://github.com/naver/nlp-challenge) - [PAWS](https://github.com/google-research-datasets/paws) - [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) - [Question Pair](https://github.com/songys/Question_pair) - [KorQuad](https://korquad.github.io/category/1.0_KOR.html) - [Korean Hate Speech](https://github.com/kocohub/korean-hate-speech) - [KoELECTRA](https://github.com/monologg/KoELECTRA) - [KoBERT](https://github.com/SKTBrain/KoBERT) - [HanBERT](https://github.com/tbai2019/HanBert-54k-N) - [HanBert Transformers](https://github.com/monologg/HanBert-Transformers)
3,679
Jeevesh8/std_0pnt2_bert_ft_cola-69
null
Entry not found
15
nateraw/bert-base-uncased-ag-news
[ "Business", "Sci/Tech", "Sports", "World" ]
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - ag_news - pytorch license: mit datasets: - ag_news metrics: - accuracy --- # bert-base-uncased-ag-news ## Model description `bert-base-uncased` finetuned on the AG News dataset using PyTorch Lightning. Sequence length 128, learning rate 2e-5, batch size 32, 4 T4 GPUs, 4 epochs. [The code can be found here](https://github.com/nateraw/hf-text-classification) #### Limitations and bias - Not the best model... ## Training data Data came from HuggingFace's `datasets` package. The data can be viewed [on nlp viewer](https://huggingface.co/nlp/viewer/?dataset=ag_news). ## Training procedure ... ## Eval results ...
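The training and eval sections are stubs and no usage code is given; a minimal sketch over the four AG News classes (the input headline is illustrative):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="nateraw/bert-base-uncased-ag-news")
print(clf("Chipmaker unveils a new GPU architecture aimed at AI workloads."))
```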
787
Jeevesh8/std_0pnt2_bert_ft_cola-72
null
Entry not found
15
IlyaGusev/xlm_roberta_large_headline_cause_full
[ "bad", "same", "rel", "left_right_cause", "right_left_cause", "left_right_refute", "right_left_refute" ]
--- language: - ru - en tags: - xlm-roberta-large datasets: - IlyaGusev/headline_cause license: apache-2.0 widget: - text: "Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку" --- # XLM-RoBERTa HeadlineCause Full ## Model description This model was trained to predict the presence of causal relations between two headlines. This model is for the Full task with 7 possible labels: titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A linked with B in another way, A is not linked to B. English and Russian languages are supported. You can use the hosted inference API to infer a label for a headline pair. To do this, you should separate the headlines with the ```</s>``` token. For example: ``` Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку ``` ## Intended uses & limitations #### How to use ```python from tqdm.notebook import tqdm from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline def get_batch(data, batch_size): start_index = 0 while start_index < len(data): end_index = start_index + batch_size batch = data[start_index:end_index] yield batch start_index = end_index def pipe_predict(data, pipe, batch_size=64): raw_preds = [] for batch in tqdm(get_batch(data, batch_size)): raw_preds += pipe(batch) return raw_preds MODEL_NAME = TOKENIZER_NAME = "IlyaGusev/xlm_roberta_large_headline_cause_full" tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME, do_lower_case=False) model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME) model.eval() pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, framework="pt", return_all_scores=True) texts = [ ( "Judge issues order to allow indoor worship in NC churches", "Some local churches resume indoor services after judge lifted NC governor’s restriction" ), ( "Gov. Kevin Stitt defends $2 million purchase of malaria drug touted by Trump", "Oklahoma spent $2 million on malaria drug touted by Trump" ), ( "Песков опроверг свой перевод на удаленку", "Дмитрий Песков перешел на удаленку" ) ] pipe_predict(texts, pipe) ``` #### Limitations and bias The models are intended to be used on news headlines. No other limitations are known. ## Training data * HuggingFace dataset: [IlyaGusev/headline_cause](https://huggingface.co/datasets/IlyaGusev/headline_cause) * GitHub: [IlyaGusev/HeadlineCause](https://github.com/IlyaGusev/HeadlineCause) ## Training procedure * Notebook: [HeadlineCause](https://colab.research.google.com/drive/1NAnD0OJ0TnYCJRsHpYUyYkjr_yi8ObcA) * Stand-alone script: [train.py](https://github.com/IlyaGusev/HeadlineCause/blob/main/headline_cause/train.py) ## Eval results Evaluation results can be found in the [arxiv paper](https://arxiv.org/pdf/2108.12626.pdf). ### BibTeX entry and citation info ```bibtex @misc{gusev2021headlinecause, title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities}, author={Ilya Gusev and Alexey Tikhonov}, year={2021}, eprint={2108.12626}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
3,246
poom-sci/WangchanBERTa-finetuned-sentiment
[ "neg", "neu", "pos" ]
--- language: - th tags: - sentiment-analysis license: apache-2.0 datasets: - wongnai_reviews - wisesight_sentiment - generated_reviews_enth widget: - text: "โอโห้ ช่องนี้เปิดโลกเรามากเลยค่ะ คือตอนช่วงหาคำตอบเรานี่อึ้งไปเลย ดูจีเนียสมากๆๆ" example_title: "Positive" - text: "เริ่มจากชายเน็ตคนหนึ่งเปิดประเด็นว่าไปพบเจ้าจุดดำลึกลับนี้กลางมหาสมุทรใน Google Maps จนนำไปสู่การเสาะหาคำตอบ และพบว่าจริง ๆ แล้วมันคืออะไรกันแน่" example_title: "Neutral" - text: "ผมเป็นคนที่ไม่มีความสุขเลยจริงๆ" example_title: "Negative" --- Created only for study :)
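The card gives widget examples but no code; a minimal sketch scoring the widget text labeled "Negative" (labels are neg/neu/pos per this record's label list):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="poom-sci/WangchanBERTa-finetuned-sentiment")
print(clf("ผมเป็นคนที่ไม่มีความสุขเลยจริงๆ"))
```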
551
HooshvareLab/bert-fa-base-uncased-sentiment-snappfood
[ "HAPPY", "SAD" ]
--- language: fa license: apache-2.0 --- # ParsBERT (v2.0) A Transformer-based Model for Persian Language Understanding We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora in order to make ParsBERT usable in further scopes! Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] This task aims to classify texts, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types. ### SnappFood [Snappfood](https://snappfood.ir/) (an online food delivery company) user comments containing 70,000 comments with two labels (i.e. polarity classification): 1. Happy 2. Sad | Label | # | |:--------:|:-----:| | Negative | 35000 | | Positive | 35000 | **Download** You can download the dataset from [here](https://drive.google.com/uc?id=15J4zPN1BD7Q_ZIQ39VeFquwSoW8qTxgu) ## Results The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures. | Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------:|:-----------:|:-----:|:-------------:| | SnappFood User Comments | 87.98 | 88.12* | 87.87 | - | ## How to use :hugs: | Task | Notebook | |---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Sentiment Analysis | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
2,650
cointegrated/rubert-tiny-bilingual-nli
[ "entailment", "not_entailment" ]
--- language: ru pipeline_tag: zero-shot-classification tags: - rubert - russian - nli - rte - zero-shot-classification widget: - text: "Сервис отстойный, кормили невкусно" candidate_labels: "Мне понравилось, Мне не понравилось" hypothesis_template: "{}." --- # RuBERT-tiny for NLI (natural language inference) This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment. For more details, see the card for a related model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
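Since the card's pipeline tag is zero-shot-classification, here is a minimal sketch mirroring the widget configuration from the card's metadata:

```python
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="cointegrated/rubert-tiny-bilingual-nli")
print(clf("Сервис отстойный, кормили невкусно",
          candidate_labels=["Мне понравилось", "Мне не понравилось"],
          hypothesis_template="{}."))
```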
633
razent/SciFive-large-Pubmed_PMC
null
--- language: - en tags: - token-classification - text-classification - question-answering - text2text-generation - text-generation datasets: - pubmed - pmc/open_access --- # SciFive Pubmed+PMC Large ## Introduction Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598) Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_ ## How to use For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive). ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-Pubmed_PMC") model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-Pubmed_PMC").to("cuda") sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ." text = "ncbi_ner: " + sentence + " </s>" encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, early_stopping=True ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(line) ```
1,408
twigs/cwi-regressor
[ "LABEL_0" ]
Entry not found
15
aychang/bert-base-cased-trec-coarse
[ "ABBR", "DESC", "ENTY", "HUM", "LOC", "NUM" ]
--- language: - en thumbnail: tags: - text-classification license: mit datasets: - trec metrics: --- # bert-base-cased trained on TREC 6-class task ## Model description A simple base BERT model trained on the "trec" dataset. ## Intended uses & limitations #### How to use ##### Transformers ```python # Load model and tokenizer from transformers import AutoModelForSequenceClassification, AutoTokenizer model_name = "aychang/bert-base-cased-trec-coarse" model = AutoModelForSequenceClassification.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) # Use pipeline from transformers import pipeline nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name) results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]) ``` ##### AdaptNLP ```python from adaptnlp import EasySequenceClassifier model_name = "aychang/bert-base-cased-trec-coarse" texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"] classifier = EasySequenceClassifier() results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2) ``` #### Limitations and bias This is a minimal language model trained on a benchmark dataset. ## Training data TREC https://huggingface.co/datasets/trec ## Training procedure Preprocessing, hardware used, hyperparameters... #### Hardware One V100 #### Hyperparameters and Training Args ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir='./models', num_train_epochs=2, per_device_train_batch_size=16, per_device_eval_batch_size=16, warmup_steps=500, weight_decay=0.01, evaluation_strategy="steps", logging_dir='./logs', save_steps=3000 ) ``` ## Eval results ``` {'epoch': 2.0, 'eval_accuracy': 0.974, 'eval_f1': array([0.98181818, 0.94444444, 1. , 0.99236641, 0.96995708, 0.98159509]), 'eval_loss': 0.138086199760437, 'eval_precision': array([0.98540146, 0.98837209, 1. , 0.98484848, 0.94166667, 0.97560976]), 'eval_recall': array([0.97826087, 0.90425532, 1. , 1. , 1. , 0.98765432]), 'eval_runtime': 1.6132, 'eval_samples_per_second': 309.943} ```
2,246
svalabs/gbert-large-zeroshot-nli
[ "contradiction", "entailment", "neutral" ]
---
language: de
tags:
- text-classification
- pytorch
- nli
- de
pipeline_tag: zero-shot-classification
widget:
- text: "Ich habe ein Problem mit meinem Iphone das so schnell wie möglich gelöst werden muss."
  candidate_labels: "Computer, Handy, Tablet, dringend, nicht dringend"
  hypothesis_template: "In diesem Satz geht es um das Thema {}."
---

# SVALabs - Gbert Large Zeroshot Nli

In this repository, we present our German zeroshot classification model.
This model was trained on the basis of the German BERT large model from [deepset.ai](https://huggingface.co/deepset/gbert-large) and finetuned for natural language inference based on 847,862 machine-translated NLI sentence pairs, using the [mnli](https://huggingface.co/datasets/multi_nli), [anli](https://huggingface.co/datasets/anli) and [snli](https://huggingface.co/datasets/snli) datasets. For this purpose, we translated the sentence pairs in these datasets to German.

If you are a German speaker you may also have a look at our [Blog post](https://focus.sva.de/zeroshot-klassifikation/) about this model and about Zeroshot Classification.

### Model Details

| | Description or Link |
|---|---|
|**Base model** | [```gbert-large```](https://huggingface.co/deepset/gbert-large) |
|**Finetuning task**| Text Pair Classification / Natural Language Inference |
|**Source datasets**| [```mnli```](https://huggingface.co/datasets/multi_nli); [```anli```](https://huggingface.co/datasets/anli); [```snli```](https://huggingface.co/datasets/snli) |

### Performance

We evaluated our model for the NLI task using the TEST set of the German part of the [xnli](https://huggingface.co/datasets/xnli) dataset.

XNLI TEST-Set Accuracy: 85.6%

### Zeroshot Text Classification Task Benchmark

We further tested our model for a zeroshot text classification task using a part of the [10kGNAD Dataset](https://tblock.github.io/10kGNAD/). Specifically, we used all articles that were labeled "Kultur", "Sport", "Web", "Wirtschaft" and "Wissenschaft". The next table shows the results as well as a comparison with other German-language and multilingual zeroshot options performing the same task:

| Model | Accuracy |
|:-------------------:|:------:|
| Svalabs/gbert-large-zeroshot-nli | 0.81 |
| Sahajtomar/German_Zeroshot | 0.76 |
| Symanto/xlm-roberta-base-snli-mnli-anli-xnli | 0.16 |
| Deepset/gbert-base | 0.65 |

### How to use

The simplest way to use the model is the huggingface transformers pipeline tool. Just initialize the pipeline specifying the task as "zero-shot-classification" and select "svalabs/gbert-large-zeroshot-nli" as model. The model requires you to specify labels, a sequence (or list of sequences) to classify and a hypothesis template. In our tests, if the labels comprise only single words, "In diesem Satz geht es um das Thema {}" performed the best. However, for multiple words, especially when they combine nouns and verbs, simple hypotheses such as "Weil {}" or "Daher {}" may work better.

Here is an example of how to use the model:

```python
from transformers import pipeline

zeroshot_pipeline = pipeline("zero-shot-classification",
                             model="svalabs/gbert-large-zeroshot-nli")

sequence = "Ich habe ein Problem mit meinem Iphone das so schnell wie möglich gelöst werden muss"
labels = ["Computer", "Handy", "Tablet", "dringend", "nicht dringend"]
hypothesis_template = "In diesem Satz geht es um das Thema {}."    

zeroshot_pipeline(sequence, labels, hypothesis_template=hypothesis_template)
```

### Contact
- Daniel Ehnes, daniel.ehnes@sva.de
- Baran Avinc, baran.avinc@sva.de
3,670
Jeevesh8/std_0pnt2_bert_ft_cola-71
null
Entry not found
15
tomh/toxigen_roberta
null
---
language:
- en
tags:
- text-classification
---

Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar.

This model comes from the paper [ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection](https://arxiv.org/abs/2203.09509) and can be used to detect implicit hate speech.

Please visit the [Github Repository](https://github.com/microsoft/TOXIGEN) for the training dataset and further details.

```bibtex
@inproceedings{hartvigsen2022toxigen,
  title = "{T}oxi{G}en: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection",
  author = "Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece",
  booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
  year = "2022"
}
```
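For a quick start, here is a minimal classification sketch using the generic `transformers` pipeline. It assumes only that this is a standard sequence-classification checkpoint; the card does not document the label names, so the printed `LABEL_*` value is whatever the checkpoint config defines (see the ToxiGen repo for label semantics):

```python
from transformers import pipeline

# Generic text-classification pipeline; works for any sequence-classification checkpoint.
classifier = pipeline("text-classification", model="tomh/toxigen_roberta")

print(classifier("This is a benign example sentence."))
# e.g. [{'label': 'LABEL_0', 'score': ...}]
```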
904
lvwerra/bert-imdb
null
# BERT-IMDB

## What is it?
BERT (`bert-large-cased`) trained for sentiment classification on the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews).

## Training setting

The model was trained on 80% of the IMDB dataset for sentiment classification for three epochs with a learning rate of `1e-5`, using the `simpletransformers` library. The library uses a learning rate schedule.

## Result
The model achieved 90% classification accuracy on the validation set.

## Reference
The full experiment is available in the [trl repo](https://lvwerra.github.io/trl/03-bert-imdb-training/).
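A minimal usage sketch with the generic `transformers` pipeline, assuming the checkpoint exposes a standard sequence-classification head (label names are not documented in this card, so treat the printed label as illustrative):

```python
from transformers import pipeline

# The pipeline handles tokenization, batching and softmax over the two sentiment classes.
sentiment = pipeline("sentiment-analysis", model="lvwerra/bert-imdb")

print(sentiment("This movie was a complete waste of time."))
print(sentiment("A beautifully shot film with a gripping story."))
```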
619
Jeevesh8/std_0pnt2_bert_ft_cola-73
null
Entry not found
15
MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary
[ "entailment", "not_entailment" ]
---
language: 
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
datasets:
- multi_nli
- anli
- fever
- lingnli
pipeline_tag: zero-shot-classification
---

# DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary

## Model description

This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).

Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". This is specifically designed for zero-shot classification, where the difference between "neutral" and "contradiction" is irrelevant.

The base model is [DeBERTa-v3-xsmall from Microsoft](https://huggingface.co/microsoft/deberta-v3-xsmall). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

device = "cuda:0" if torch.cuda.is_available() else "cpu"  # must be defined before use
model.to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).

### Training procedure

DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # ratio of warmup steps for the learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
dataset | mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c --------|---------|----------|---------|----------|----------|------ accuracy | 0.925 | 0.922 | 0.892 | 0.676 | 0.665 | 0.888 speed (text/sec, CPU, 128 batch) | 6.0 | 6.3 | 3.0 | 5.8 | 5.0 | 7.6 speed (text/sec, GPU Tesla P100, 128 batch) | 473 | 487 | 230 | 390 | 340 | 586 ## Limitations and bias Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases. ### BibTeX entry and citation info If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub. ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ### Debugging and issues Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.
4,328
abhishek/autonlp-bbc-news-classification-37229289
[ "business", "entertainment", "politics", "sport", "tech" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - abhishek/autonlp-data-bbc-news-classification co2_eq_emissions: 5.448567309047846 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 37229289 - CO2 Emissions (in grams): 5.448567309047846 ## Validation Metrics - Loss: 0.07081354409456253 - Accuracy: 0.9867109634551495 - Macro F1: 0.9859067529980614 - Micro F1: 0.9867109634551495 - Weighted F1: 0.9866417220968429 - Macro Precision: 0.9868771404595043 - Micro Precision: 0.9867109634551495 - Weighted Precision: 0.9869289511551576 - Macro Recall: 0.9853173241852486 - Micro Recall: 0.9867109634551495 - Weighted Recall: 0.9867109634551495 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-bbc-news-classification-37229289 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-bbc-news-classification-37229289", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-bbc-news-classification-37229289", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,425
Jeevesh8/std_0pnt2_bert_ft_cola-74
null
Entry not found
15
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment
null
---
language: 
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
inference: true
widget:
- text: "今天心情不好"
---

# Erlangshen-MegatronBert-1.3B-Sentiment, a sentiment analysis model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).

We collected 8 sentiment datasets in the Chinese domain for finetuning, with a total of 227,347 samples. Our model is mainly based on [MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B).

## Usage

```python
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment')
model = AutoModelForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment')

text = '今天心情不好'

output = model(torch.tensor([tokenizer.encode(text)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
```

## Scores on downstream Chinese tasks

| Model | ASAP-SENT | ASAP-ASPECT | ChnSentiCorp |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-Sentiment | 97.77 | 97.31 | 96.61 |
| Erlangshen-Roberta-330M-Sentiment | 97.9 | 97.51 | 96.66 |
| Erlangshen-MegatronBert-1.3B-Sentiment | 98.1 | 97.8 | 97 |

## Citation
If you find the resource is useful, please cite the following website in your paper.

```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
1,582
boychaboy/SNLI_distilbert-base-cased
[ "contradiction", "entailment", "neutral" ]
Entry not found
15
Recognai/zeroshot_selectra_medium
[ "contradiction", "neutral", "entailment" ]
--- language: es tags: - zero-shot-classification - nli - pytorch datasets: - xnli pipeline_tag: zero-shot-classification license: apache-2.0 widget: - text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo" candidate_labels: "cultura, sociedad, economia, salud, deportes" --- # Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA *Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html). In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier. ## Usage ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="Recognai/zeroshot_selectra_medium") classifier( "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo", candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"], hypothesis_template="Este ejemplo es {}." ) """Output {'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo', 'labels': ['sociedad', 'cultura', 'economia', 'salud', 'deportes'], 'scores': [0.6450043320655823, 0.16710571944713593, 0.08507631719112396, 0.0759836807847023, 0.026829993352293968]} """ ``` The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.** ## Demo and tutorial If you want to see this model in action, we have created a basic tutorial using [Rubrix](https://www.rubrix.ml/), a free and open-source tool to *explore, annotate, and monitor data for NLP*. The tutorial shows you how to evaluate this classifier for news categorization in Spanish, and how it could be used to build a training set for training a supervised classifier (which might be useful if you want obtain more precise results or improve the model over time). You can [find the tutorial here](https://rubrix.readthedocs.io/en/master/tutorials/zeroshot_data_annotation.html). See the video below showing the predictions within the annotation process (see that the predictions are almost correct for every example). 
<video width="100%" controls><source src="https://github.com/recognai/rubrix-materials/raw/main/tutorials/videos/zeroshot_selectra_news_data_annotation.mp4" type="video/mp4"></video> ## Metrics | Model | Params | XNLI (acc) | \*MLSUM (acc) | | --- | --- | --- | --- | | [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 | | zs SELECTRA medium | 41M | **0.807** | **0.589** | | [zs SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small) | **22M** | 0.795 | 0.446 | \*evaluated with zero-shot learning (ZSL) - **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion. - **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb) ## Training Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details. ## Authors - David Fidalgo ([GitHub](https://github.com/dcfidalgo)) - Daniel Vila ([GitHub](https://github.com/dvsrepo)) - Francisco Aranda ([GitHub](https://github.com/frascuchon)) - Javier Lopez ([GitHub](https://github.com/javispp))
4,337
Jeevesh8/std_0pnt2_bert_ft_cola-75
null
Entry not found
15
cross-encoder/nli-deberta-base
[ "contradiction", "entailment", "neutral" ]
---
language: en
pipeline_tag: zero-shot-classification
tags:
- deberta-base
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---

# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.

## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance
For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).

## Usage

Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/nli-deberta-base')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])

# Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```

## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-base')

features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    label_mapping = ['contradiction', 'entailment', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
```

## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-base')

sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
2,564
microsoft/DialogRPT-width
null
# Demo

Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)

| Context | Response | `width` score |
| :------ | :------- | :------------: |
| I love NLP! | Can anyone recommend a nice review paper? | 0.701 |
| I love NLP! | Me too! | 0.029 |

The `width` score predicts how likely the response is getting replied.

# DialogRPT-width

### Dialog Ranking Pretrained Transformers

> How likely a dialog response is upvoted 👍 and/or gets replied 💬?

This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is learned to predict. It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/), trained on more than 100 million human feedback examples. It can be used to improve existing dialog generation models (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates.

Quick Links:
* [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/)
* [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT)
* [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)

We considered the following tasks and provided corresponding pretrained models.

|Task | Description | Pretrained model |
| :------------- | :----------- | :-----------: |
| **Human feedback** | **given a context and its two human responses, predict...**|
| `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) |
| `width`| ... which gets more direct replies? | this model |
| `depth`| ... which gets longer follow-up thread? | [model card](https://huggingface.co/microsoft/DialogRPT-depth) |
| **Human-like** (human vs fake) | **given a context and one human response, distinguish it with...** |
| `human_vs_rand`| ... a random human response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-rand) |
| `human_vs_machine`| ... a machine generated response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-machine) |

### Contact:
Please create an issue on [our repo](https://github.com/golsun/DialogRPT)

### Citation:
```
@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
    author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
    year={2020},
    booktitle={EMNLP}
}
```
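Since the card defers usage to the Colab notebook, here is a minimal scoring sketch following the recipe in the DialogRPT repo: the context and response are joined by the `<|endoftext|>` token and the single logit is squashed with a sigmoid. Treat the exact numbers as illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-width")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-width")
model.eval()

def score(context: str, response: str) -> float:
    # DialogRPT scores "context <|endoftext|> response" as one sequence.
    ids = tokenizer.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids, return_dict=True).logits
    return torch.sigmoid(logits).item()

print(score("I love NLP!", "Can anyone recommend a nice review paper?"))  # high width score
print(score("I love NLP!", "Me too!"))                                    # low width score
```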
2,607
Maklygin/mBert-relation-extraction-FT
null
Entry not found
15
mmillet/distilrubert-tiny-2ndfinetune-epru
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distilrubert-tiny-2ndfinetune-epru results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilrubert-tiny-2ndfinetune-epru This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2085 - Accuracy: 0.9333 - F1: 0.9319 - Precision: 0.9336 - Recall: 0.9333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.4825 | 1.0 | 13 | 0.2988 | 0.8848 | 0.8827 | 0.9056 | 0.8848 | | 0.2652 | 2.0 | 26 | 0.2435 | 0.9212 | 0.9216 | 0.9282 | 0.9212 | | 0.168 | 3.0 | 39 | 0.2120 | 0.9515 | 0.9501 | 0.9524 | 0.9515 | | 0.1593 | 4.0 | 52 | 0.1962 | 0.9333 | 0.9330 | 0.9366 | 0.9333 | | 0.1294 | 5.0 | 65 | 0.1855 | 0.9333 | 0.9334 | 0.9355 | 0.9333 | | 0.1065 | 6.0 | 78 | 0.1780 | 0.9394 | 0.9393 | 0.9399 | 0.9394 | | 0.0908 | 7.0 | 91 | 0.1967 | 0.9394 | 0.9388 | 0.9388 | 0.9394 | | 0.0432 | 8.0 | 104 | 0.2085 | 0.9333 | 0.9319 | 0.9336 | 0.9333 | ### Framework versions - Transformers 4.19.3 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
2,288
ipuneetrathore/bert-base-cased-finetuned-finBERT
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
## FinBERT Code for importing and using this model is available [here](https://github.com/ipuneetrathore/BERT_models)
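The linked repo carries the full examples; as a stopgap, a minimal loading sketch, assuming a standard sequence-classification head (the mapping of `LABEL_0`-`LABEL_2` to financial sentiments is documented in the linked repo, not here, and the input sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "ipuneetrathore/bert-base-cased-finetuned-finBERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Quarterly revenue grew 20% year over year.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # one probability per label (LABEL_0..LABEL_2)
```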
119
Jeevesh8/std_0pnt2_bert_ft_cola-76
null
Entry not found
15
DTAI-KULeuven/robbert-v2-dutch-sentiment
[ "Negative", "Positive" ]
---
language: nl
license: mit
datasets:
- dbrd
model-index:
- name: robbert-v2-dutch-sentiment
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: dbrd
      type: sentiment-analysis
      split: test
    metrics:
      - name: Accuracy
        type: accuracy
        value: 0.93325
widget:
- text: "Ik erken dat dit een boek is, daarmee is alles gezegd."
- text: "Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!"
thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png"
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
---

<p align="center">
    <img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo_with_name.png" alt="RobBERT: A Dutch RoBERTa-based Language Model" width="75%">
</p>

# RobBERT finetuned for sentiment analysis on DBRD

This is a finetuned model based on [RobBERT (v2)](https://huggingface.co/pdelobelle/robbert-v2-dutch-base). We used [DBRD](https://huggingface.co/datasets/dbrd), which consists of book reviews from [hebban.nl](https://hebban.nl). Hence our example sentences about books. We did some limited experiments to test if this also works for other domains, but those results were not exactly amazing.

We released a distilled model and a `base`-sized model. Both models perform quite well, so there is only a slight performance tradeoff:

| Model | Identifier | Layers | #Params. | Accuracy |
|----------------|------------------------------------------------------------------------|--------|-----------|-----------|
| RobBERT (v2) | [`DTAI-KULeuven/robbert-v2-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment) | 12 | 116 M |93.3* |
| RobBERTje - Merged (p=0.5)| [`DTAI-KULeuven/robbertje-merged-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbertje-merged-dutch-sentiment) | 6 | 74 M |92.9 |

*The results of RobBERT are of a different run than the one reported in the paper.

# Training data and setup
We used the [Dutch Book Reviews Dataset (DBRD)](https://huggingface.co/datasets/dbrd) from van der Burgh et al. (2019). Originally, these reviews got a five-star rating, but this has been converted to positive (⭐️⭐️⭐️⭐️ and ⭐️⭐️⭐️⭐️⭐️), neutral (⭐️⭐️⭐️) and negative (⭐️ and ⭐️⭐️). We used 19.5k reviews for the training set, 528 reviews for the validation set and 2224 to calculate the final accuracy.

The validation set was used to evaluate a random hyperparameter search over the learning rate, weight decay and gradient accumulation steps. The full training details are available in [`training_args.bin`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment/blob/main/training_args.bin) as a binary PyTorch file.

# Limitations and biases
- The domain of the reviews is limited to book reviews.
- Most authors of the book reviews were women, which could have caused [a difference in performance for reviews written by men and women](https://www.aclweb.org/anthology/2020.findings-emnlp.292).
- This is _not_ the same model as we discussed in our paper: due to some conversion issues between the original training two years ago and now, it was easier to retrain this model. The accuracy is slightly lower, but the model was trained on the beginning of the reviews instead of the end of the reviews.

## Credits and citation

This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/).
If you would like to cite our paper or models, you can use the following BibTeX: ``` @inproceedings{delobelle2020robbert, title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel", author = "Delobelle, Pieter and Winters, Thomas and Berendt, Bettina", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292", doi = "10.18653/v1/2020.findings-emnlp.292", pages = "3255--3265" } ```
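A minimal usage sketch, reusing one of the widget sentences above (generic `transformers` pipeline; the checkpoint's labels are `Negative` and `Positive`):

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="DTAI-KULeuven/robbert-v2-dutch-sentiment")

print(sentiment("Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!"))
# expected: [{'label': 'Positive', 'score': ...}]
```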
4,294
Jeevesh8/std_0pnt2_bert_ft_cola-77
null
Entry not found
15
cardiffnlp/bertweet-base-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
0
valhalla/bart-large-sst2
[ "NEGATIVE", "POSITIVE" ]
Entry not found
15
dmis-lab/biobert-large-cased-v1.1-mnli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
textattack/roberta-base-MRPC
null
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9117647058823529, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
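A minimal usage sketch: MRPC is a sentence-pair (paraphrase) task, so both sentences are encoded together. Treat the "index 1 means equivalent" mapping as an assumption carried over from the GLUE/MRPC convention, since the card itself does not state it; the sentence pair is illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/roberta-base-MRPC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Encode the two sentences as a single pair.
inputs = tokenizer("The company posted record profits.",
                   "Profits at the company hit an all-time high.",
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # probs[0][1]: probability the pair is a paraphrase (GLUE convention)
```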
618
mrm8488/bert-mini-finetuned-age_news-classification
[ "World", "Sports", "Business", "Sci/Tech" ]
---
language: en
tags:
- news
- classification
- mini
datasets:
- ag_news
widget:
- text: "Israel withdraws from Gaza camp Israel withdraws from Khan Younis refugee camp in the Gaza Strip, after a four-day operation that left 11 dead."
---

# BERT-Mini fine-tuned on age_news dataset for news classification

Test set accuracy: 0.93
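A minimal usage sketch, reusing the widget text above (generic `transformers` pipeline; the four classes come from this checkpoint's label set):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="mrm8488/bert-mini-finetuned-age_news-classification")

text = ("Israel withdraws from Gaza camp Israel withdraws from Khan Younis refugee camp "
        "in the Gaza Strip, after a four-day operation that left 11 dead.")
print(classifier(text))  # expected top label: "World"
```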
331
Jeevesh8/std_0pnt2_bert_ft_cola-78
null
Entry not found
15
Mithil/Bert
null
--- license: afl-3.0 ---
25
IMSyPP/hate_speech_en
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
---
widget:
- text: "My name is Mark and I live in London. I am a postgraduate student at Queen Mary University."
language:
- en
license: mit
---

# Hate Speech Classifier for Social Media Content in English Language

A monolingual model for hate speech classification of social media content in the English language. The model was trained on 103,190 YouTube comments and tested on an independent test set of 20,554 YouTube comments. It is based on the English BERT base pre-trained language model.

## Tokenizer

During training the text was preprocessed using the original English BERT base tokenizer. We suggest using the same tokenizer for inference.

## Model output

The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent
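A minimal sketch mapping the pipeline output onto the four classes listed above, using the card's widget sentence (the `LABEL_i` to class mapping follows the 0-3 scheme in this card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="IMSyPP/hate_speech_en")

id2class = {"LABEL_0": "acceptable", "LABEL_1": "inappropriate",
            "LABEL_2": "offensive", "LABEL_3": "violent"}

pred = classifier("My name is Mark and I live in London. "
                  "I am a postgraduate student at Queen Mary University.")[0]
print(id2class[pred["label"]], pred["score"])
```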
802
snunlp/KR-FinBert-SC
[ "negative", "neutral", "positive" ]
---
language: 
- ko
---

# KR-FinBert & KR-FinBert-SC

Much progress has been made in the NLP (Natural Language Processing) field, with numerous studies showing that domain adaptation using a small-scale corpus and fine-tuning with labeled data is effective for overall performance improvement. We proposed KR-FinBert for the financial domain by further pre-training it on a financial corpus and fine-tuning it for sentiment analysis. As many studies have shown, the performance improvement through domain adaptation was also clear in this downstream-task experiment.

![KR-FinBert](https://huggingface.co/snunlp/KR-FinBert/resolve/main/images/KR-FinBert.png)

## Data

The training data for this model is expanded from those of **[KR-BERT-MEDIUM](https://huggingface.co/snunlp/KR-Medium)**: texts from Korean Wikipedia, general news articles, legal texts crawled from the National Law Information Center and the [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). For the transfer learning, **corporate-related economic news articles from 72 media sources** such as the Financial Times, The Korean Economy Daily, etc. and **analyst reports from 16 securities companies** such as Kiwoom Securities, Samsung Securities, etc. are added. Included in the dataset are 440,067 news titles with their content and 11,237 analyst reports. **The total data size is about 13.22GB.** For MLM training, we split the data line by line; **the total number of lines is 6,379,315.**
KR-FinBert is trained for 5.5M steps with a maximum length of 512, a training batch size of 32, and a learning rate of 5e-5, taking 67.48 hours to train the model using an NVIDIA TITAN XP.

## Downstream tasks

### Sentiment classification model

Downstream task performance with 50,000 labeled examples.

|Model|Accuracy|
|-|-|
|KR-FinBert|0.963|
|KR-BERT-MEDIUM|0.958|
|KcBert-large|0.955|
|KcBert-base|0.953|
|KoBert|0.817|

### Inference sample

|Positive|Negative|
|-|-|
|현대바이오, '폴리탁셀' 코로나19 치료 가능성에 19% 급등 | 영화관株 '코로나 빙하기' 언제 끝나나…"CJ CGV 올 4000억 손실 날수도" |
|이수화학, 3분기 영업익 176억…전년比 80%↑ | C쇼크에 멈춘 흑자비행…대한항공 1분기 영업적자 566억 |
|"GKL, 7년 만에 두 자릿수 매출성장 예상" | '1000억대 횡령·배임' 최신원 회장 구속… SK네트웍스 "경영 공백 방지 최선" |
|위지윅스튜디오, 콘텐츠 활약에 사상 첫 매출 1000억원 돌파 | 부품 공급 차질에…기아차 광주공장 전면 가동 중단 |
|삼성전자, 2년 만에 인도 스마트폰 시장 점유율 1위 '왕좌 탈환' | 현대제철, 지난해 영업익 3,313억원···전년比 67.7% 감소 |

### Citation

```
@misc{kr-FinBert-SC,
  author = {Kim, Eunhee and Hyopil Shin},
  title = {KR-FinBert: Fine-tuning KR-FinBert for Sentiment Analysis},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://huggingface.co/snunlp/KR-FinBert-SC}}
}
```
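A minimal usage sketch with one of the positive samples from the inference table above (generic `transformers` pipeline; the three labels come from this checkpoint's label set):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="snunlp/KR-FinBert-SC")

print(classifier("이수화학, 3분기 영업익 176억…전년比 80%↑"))  # expected top label: positive
```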
2,669
Jeevesh8/std_0pnt2_bert_ft_cola-79
null
Entry not found
15
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI
null
---
language: 
- zh
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "今天心情不好[SEP]今天很开心"
---

# Erlangshen-MegatronBert-1.3B-NLI, a natural language inference model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).

We collected 4 NLI (Natural Language Inference) datasets in the Chinese domain for finetuning, with a total of 1,014,787 samples. Our model is mainly based on [MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B).

## Usage

```python
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI')
model = AutoModelForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI')

texta = '今天的饭不好吃'
textb = '今天心情不好'

output = model(torch.tensor([tokenizer.encode(texta, textb)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
```

## Scores on downstream Chinese tasks (without any data augmentation)

| Model | cmnli | ocnli | snli |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-NLI | 80.83 | 78.56 | 88.01 |
| Erlangshen-Roberta-330M-NLI | 82.25 | 79.82 | 88 |
| Erlangshen-MegatronBert-1.3B-NLI | 84.52 | 84.17 | 88.67 |

## Citation
If you find the resource is useful, please cite the following website in your paper.

```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
1,596
cross-encoder/nli-MiniLM2-L6-H768
[ "contradiction", "entailment", "neutral" ]
--- language: en pipeline_tag: zero-shot-classification license: apache-2.0 tags: - MiniLMv2 datasets: - multi_nli - snli metrics: - accuracy --- # Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-MiniLM2-L6-H768') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) #Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-MiniLM2-L6-H768') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-MiniLM2-L6-H768') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-MiniLM2-L6-H768') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
2,567
TransQuest/monotransquest-da-en_zh-wiki
[ "LABEL_0" ]
--- language: en-zh tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-en_zh-wiki", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
5,401
eleldar/language-detection
[ "ar", "bg", "de", "el", "en", "es", "fr", "hi", "it", "ja", "nl", "pl", "pt", "ru", "sw", "th", "tr", "ur", "vi", "zh" ]
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-language-detection
  results: []
---

# Clone from [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection)

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset.

## Model description

This model is an XLM-RoBERTa transformer model with a classification head on top (i.e. a linear layer on top of the pooled output). For additional information please refer to the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) model card or to the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al.

## Intended uses & limitations

You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 20 languages: 

`arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)`

## Training and evaluation data

The model was fine-tuned on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset, which consists of text sequences in 20 languages. The training set contains 70k samples, while the validation and test sets contain 10k samples each. The average accuracy on the test set is **99.6%** (this matches the average macro/weighted F1-score, the test set being perfectly balanced). A more detailed evaluation is provided by the following table.

| Language | Precision | Recall | F1-score | support |
|:--------:|:---------:|:------:|:--------:|:-------:|
|ar |0.998 |0.996 |0.997 |500 |
|bg |0.998 |0.964 |0.981 |500 |
|de |0.998 |0.996 |0.997 |500 |
|el |0.996 |1.000 |0.998 |500 |
|en |1.000 |1.000 |1.000 |500 |
|es |0.967 |1.000 |0.983 |500 |
|fr |1.000 |1.000 |1.000 |500 |
|hi |0.994 |0.992 |0.993 |500 |
|it |1.000 |0.992 |0.996 |500 |
|ja |0.996 |0.996 |0.996 |500 |
|nl |1.000 |1.000 |1.000 |500 |
|pl |1.000 |1.000 |1.000 |500 |
|pt |0.988 |1.000 |0.994 |500 |
|ru |1.000 |0.994 |0.997 |500 |
|sw |1.000 |1.000 |1.000 |500 |
|th |1.000 |0.998 |0.999 |500 |
|tr |0.994 |0.992 |0.993 |500 |
|ur |1.000 |1.000 |1.000 |500 |
|vi |0.992 |1.000 |0.996 |500 |
|zh |1.000 |1.000 |1.000 |500 |

### Benchmarks

As a baseline to compare `xlm-roberta-base-language-detection` against, we have used the Python [langid](https://github.com/saffsd/langid.py) library. Since it comes pre-trained on 97 languages, we have used its `.set_languages()` method to constrain the language set to our 20 languages. The average accuracy of langid on the test set is **98.5%**. More details are provided by the table below.
| Language | Precision | Recall | F1-score | support | |:--------:|:---------:|:------:|:--------:|:-------:| |ar |0.990 |0.970 |0.980 |500 | |bg |0.998 |0.964 |0.981 |500 | |de |0.992 |0.944 |0.967 |500 | |el |1.000 |0.998 |0.999 |500 | |en |1.000 |1.000 |1.000 |500 | |es |1.000 |0.968 |0.984 |500 | |fr |0.996 |1.000 |0.998 |500 | |hi |0.949 |0.976 |0.963 |500 | |it |0.990 |0.980 |0.985 |500 | |ja |0.927 |0.988 |0.956 |500 | |nl |0.980 |1.000 |0.990 |500 | |pl |0.986 |0.996 |0.991 |500 | |pt |0.950 |0.996 |0.973 |500 | |ru |0.996 |0.974 |0.985 |500 | |sw |1.000 |1.000 |1.000 |500 | |th |1.000 |0.996 |0.998 |500 | |tr |0.990 |0.968 |0.979 |500 | |ur |0.998 |0.996 |0.997 |500 | |vi |0.971 |0.990 |0.980 |500 | |zh |1.000 |1.000 |1.000 |500 | ## Training procedure Fine-tuning was done via the `Trainer` API. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results The validation results on the `valid` split of the Language Identification dataset are summarised here below. | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2492 | 1.0 | 1094 | 0.0149 | 0.9969 | 0.9969 | | 0.0101 | 2.0 | 2188 | 0.0103 | 0.9977 | 0.9977 | In short, it achieves the following results on the validation set: - Loss: 0.0101 - Accuracy: 0.9977 - F1: 0.9977 ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
5,748
microsoft/DialogRPT-depth
null
# Demo

Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)

| Context | Response | `depth` score |
| :------ | :------- | :------------: |
| I love NLP! | Can anyone recommend a nice review paper? | 0.724 |
| I love NLP! | Me too! | 0.032 |

The `depth` score predicts how likely the response is getting a long follow-up discussion thread.

# DialogRPT-depth

### Dialog Ranking Pretrained Transformers

> How likely a dialog response is upvoted 👍 and/or gets replied 💬?

This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is learned to predict. It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/), trained on more than 100 million human feedback examples. It can be used to improve existing dialog generation models (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates.

Quick Links:
* [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/)
* [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT)
* [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)

We considered the following tasks and provided corresponding pretrained models.

|Task | Description | Pretrained model |
| :------------- | :----------- | :-----------: |
| **Human feedback** | **given a context and its two human responses, predict...**|
| `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) |
| `width`| ... which gets more direct replies? | [model card](https://huggingface.co/microsoft/DialogRPT-width) |
| `depth`| ... which gets longer follow-up thread? | this model |
| **Human-like** (human vs fake) | **given a context and one human response, distinguish it with...** |
| `human_vs_rand`| ... a random human response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-rand) |
| `human_vs_machine`| ... a machine generated response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-machine) |

### Contact:
Please create an issue on [our repo](https://github.com/golsun/DialogRPT)

### Citation:
```
@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
    author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
    year={2020},
    booktitle={EMNLP}
}
```
2,634
dhtocks/Topic-Classification
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
Entry not found
15
huggingface/distilbert-base-uncased-finetuned-mnli
[ "contradiction", "entailment", "neutral" ]
Entry not found
15
Gunulhona/tbbcmodel
[ "Agitation", "Changes in Appetite", "Changes in Sleeping Pattern", "Concentration Difficultiy", "Crying", "Gulity Feelings", "Indecisivness", "Irritability", "Loss of Energy", "Loss of Interest", "Loss of Interest in Sex", "Loss of Pleasure", "Non BDI", "Past Failure", "Pessimism", "Punishment Feelings", "Sadness", "Self-Criticalness", "Self-Dislike", "Suicidal Thoughts or Wishes", "Tiredness or Fatigue", "Worthlessness" ]
Entry not found
15
Elron/bleurt-base-512
[ "LABEL_0" ]
## BLEURT

Pytorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.

The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).

## Usage Example

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-512")
model.eval()

references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]

with torch.no_grad():
    scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()

print(scores) # tensor([1.0327, 0.2055])
```
999
Gunulhona/tbnymodel
[ "Negative", "Non Related", "Positive" ]
Entry not found
15
Gunulhona/tbecmodel
[ "ANGER", "DISGUST", "FEAR", "HAPPINESS", "NEUTRALITY", "SADNESS", "SURPRISED" ]
Entry not found
15
valurank/distilroberta-news-small
[ "bad", "good", "medium" ]
---
license: other
language: en
datasets:
- valurank/news-small
---

# DistilROBERTA fine-tuned for news classification

This model is based on [distilroberta-base](https://huggingface.co/distilroberta-base) pretrained weights, with a classification head fine-tuned to classify news articles into 3 categories (bad, medium, good).

## Training data

The dataset used to fine-tune the model is [news-small](https://huggingface.co/datasets/valurank/news-small), the 300-article news dataset manually annotated by Alex.

## Inputs

Similar to its base model, this model accepts inputs with a maximum length of 512 tokens.
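A minimal usage sketch (generic `transformers` pipeline; the three quality labels come from this checkpoint's label set, and the input headline is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="valurank/distilroberta-news-small")

article = "Scientists publish a peer-reviewed study on battery chemistry."  # illustrative input
print(classifier(article))  # top label is one of: "bad", "medium", "good"
```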
618
Seethal/sentiment_analysis_generic_dataset
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
## BERT base model (uncased)

Pretrained model on English language using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between english and English.

## Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
* Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.

## Fine-tuned model description [Seethal/sentiment_analysis_generic_dataset]

This is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis; this model is not intended for further downstream fine-tuning for any other tasks. This model is trained on a labeled dataset for text classification.
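The card omits a usage example; here is a minimal sketch, assuming a standard sequence-classification head. The card does not document which of `LABEL_0`-`LABEL_2` maps to which sentiment, so the mapping below is deliberately left symbolic:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Seethal/sentiment_analysis_generic_dataset"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("I really enjoyed this!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax())])  # one of LABEL_0..LABEL_2
```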
1,988
textattack/bert-base-uncased-snli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
razent/SciFive-base-Pubmed
null
---
language: 
- en
tags:
- token-classification
- text-classification
- question-answering
- text2text-generation
- text-generation
datasets:
- pubmed
---

# SciFive Pubmed Base

## Introduction
Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)

Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_

## How to use
For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-base-Pubmed")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-base-Pubmed")
model.to("cuda")  # the inputs below are placed on the GPU, so the model must be moved there too

sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ."
text = "ncbi_ner: " + sentence + " </s>"

encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")

outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=256,
    early_stopping=True
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
1,375
textattack/albert-base-v2-MRPC
null
## TextAttack Model Card

This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.8970588235294118, as measured by the eval set accuracy, found after 4 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
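A minimal inference sketch for paraphrase detection with this checkpoint. The card does not document the label mapping; the comment below assumes the usual MRPC convention (index 1 = paraphrase), which should be verified against `model.config.id2label`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "textattack/albert-base-v2-MRPC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode a sentence pair, matching the 128-token maximum used in fine-tuning.
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Quarterly profits at the firm hit an all-time high.",
    return_tensors="pt",
    truncation=True,
    max_length=128,
)

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # assumed order: [P(not paraphrase), P(paraphrase)]
```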
620
HooshvareLab/bert-fa-base-uncased-clf-persiannews
[ "اجتماعی", "اقتصادی", "بین الملل", "سیاسی", "علمی فناوری", "فرهنگی هنری", "ورزشی", "پزشکی" ]
---
language: fa
license: apache-2.0
---

# ParsBERT (v2.0)

A Transformer-based Model for Persian Language Understanding

We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes! Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.

## Persian Text Classification [DigiMag, Persian News]

The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`.

### Persian News

A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes:

1. Social
2. Economic
3. International
4. Political
5. Science Technology
6. Cultural Art
7. Sport
8. Medical

| Label              | #    |
|:------------------:|:----:|
| Social             | 2170 |
| Economic           | 1564 |
| International      | 1975 |
| Political          | 2269 |
| Science Technology | 2436 |
| Cultural Art       | 2558 |
| Sport              | 1381 |
| Medical            | 2085 |

**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC)

## Results

The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.

| Dataset      | ParsBERT v2 | ParsBERT v1 | mBERT |
|:------------:|:-----------:|:-----------:|:-----:|
| Persian News | 97.44*      | 97.19       | 95.79 |

## How to use :hugs:

| Task                | Notebook |
|---------------------|----------|
| Text Classification | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |

### BibTeX entry and citation info

Please cite in publications as the following:

```bibtex
@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```

## Questions?

Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
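A minimal usage sketch for the Persian News classifier (the notebook linked above shows the full workflow); the example headline and the printed output are illustrative:

```python
from transformers import pipeline

# Load the fine-tuned Persian News classification checkpoint.
classifier = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-clf-persiannews",
)

# "Iran's national football team qualified for the World Cup."
print(classifier("تیم ملی فوتبال ایران به جام جهانی صعود کرد."))
# e.g. [{'label': 'ورزشی', 'score': 0.99}]  (Sport)
```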
2,767
cardiffnlp/bertweet-base-sentiment
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
0
assemblyai/distilbert-base-uncased-sst2
null
# DistilBERT-Base-Uncased for Sentiment Analysis

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased), originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Stanford Sentiment Treebank v2 (SST2)](https://nlp.stanford.edu/sentiment/), part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post]().

## Usage

To download and utilize this model for sentiment analysis, please execute the following:

```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-sst2")
model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-sst2")

tokenized_segments = tokenizer(["AssemblyAI is the best speech-to-text API for modern developers with performance being second to none!"], return_tensors="pt", padding=True, truncation=True)
tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask
model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1)

print("Positive probability: "+str(model_predictions[0][1].item()*100)+"%")
print("Negative probability: "+str(model_predictions[0][0].item()*100)+"%")
```

For questions about how to use this model, feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)!
1,779