modelId: string (length 6-107)
label: list
readme: string (length 0-56.2k)
readme_len: int64 (0-56.2k)
cross-encoder/nli-deberta-v3-xsmall
[ "contradiction", "entailment", "neutral" ]
--- language: en pipeline_tag: zero-shot-classification tags: - microsoft/deberta-v3-xsmall datasets: - multi_nli - snli metrics: - accuracy license: apache-2.0 --- # Cross-Encoder for Natural Language Inference This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It is based on [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall). ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it outputs three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance - Accuracy on the SNLI test set: 91.64 - Accuracy on the MNLI mismatched set: 87.77 For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-deberta-v3-xsmall') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) # Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can also use the model directly with the Transformers library (without the SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-xsmall') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-xsmall') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-xsmall') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
2,791
StevenLimcorn/indonesian-roberta-base-emotion-classifier
[ "anger", "fear", "happy", "love", "sadness" ]
--- language: id tags: - roberta license: mit datasets: - indonlu widget: - text: "Hal-hal baik akan datang." --- # Indo RoBERTa Emotion Classifier Indo RoBERTa Emotion Classifier is an emotion classifier based on the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model. It was trained on the [IndoNLU EmoT](https://huggingface.co/datasets/indonlu) dataset. The base model, [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base), was transfer-learned into an emotion classifier. Based on the [IndoNLU benchmark](https://www.indobenchmark.com/), the model achieves an F1-macro of 72.05%, an accuracy of 71.81%, a precision of 72.47% and a recall of 71.94%. ## Model The model was trained for 7 epochs with a learning rate of 2e-5, achieving the metrics shown below. | Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | |-------|---------------|-----------------|----------|----------|-----------|----------| | 1 | 1.300700 | 1.005149 | 0.622727 | 0.601846 | 0.640845 | 0.611144 | | 2 | 0.806300 | 0.841953 | 0.686364 | 0.694096 | 0.701984 | 0.696657 | | 3 | 0.591900 | 0.796794 | 0.686364 | 0.696573 | 0.707520 | 0.691671 | | 4 | 0.441200 | 0.782094 | 0.722727 | 0.724359 | 0.725985 | 0.730229 | | 5 | 0.334700 | 0.809931 | 0.711364 | 0.720550 | 0.718318 | 0.724608 | | 6 | 0.268400 | 0.812771 | 0.718182 | 0.724192 | 0.721222 | 0.729195 | | 7 | 0.226000 | 0.828461 | 0.725000 | 0.733625 | 0.731709 | 0.735800 | ## How to Use ### As Text Classifier ```python from transformers import pipeline pretrained_name = "StevenLimcorn/indonesian-roberta-base-emotion-classifier" nlp = pipeline( "sentiment-analysis", model=pretrained_name, tokenizer=pretrained_name ) nlp("Hal-hal baik akan datang.") ``` ## Disclaimer Do consider the biases from both the pre-trained RoBERTa model and the `EmoT` dataset, which may be carried over into the results of this model. ## Author Indonesian RoBERTa Base Emotion Classifier was trained and evaluated by [Steven Limcorn](https://github.com/stevenlimcorn). All computation and development were done on Google Colaboratory using their free GPU access.
2,326
alisawuffles/roberta-large-wanli
[ "contradiction", "entailment", "neutral" ]
--- widget: - text: "I almost forgot to eat lunch.</s></s>I didn't forget to eat lunch." - text: "I almost forgot to eat lunch.</s></s>I forgot to eat lunch." - text: "I ate lunch.</s></s>I almost forgot to eat lunch." --- This is an off-the-shelf roberta-large model finetuned on WANLI, the Worker-AI Collaborative NLI dataset ([Liu et al., 2022](https://arxiv.org/abs/2201.05955)). It outperforms the `roberta-large-mnli` model on seven out-of-domain test sets, including by 11% on HANS and 9% on Adversarial NLI. ### How to use ```python from transformers import RobertaTokenizer, RobertaForSequenceClassification model = RobertaForSequenceClassification.from_pretrained('alisawuffles/roberta-large-wanli') tokenizer = RobertaTokenizer.from_pretrained('alisawuffles/roberta-large-wanli') x = tokenizer("I almost forgot to eat lunch.", "I didn't forget to eat lunch.", hypothesis, return_tensors='pt', max_length=128, truncation=True) logits = model(**x).logits probs = logits.softmax(dim=1).squeeze(0) label_id = torch.argmax(probs).item() prediction = model.config.id2label[label_id] ``` ### Citation ``` @misc{liu-etal-2022-wanli, title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation", author = "Liu, Alisa and Swayamdipta, Swabha and Smith, Noah A. and Choi, Yejin", month = jan, year = "2022", url = "https://arxiv.org/pdf/2201.05955", } ```
1,436
ChrisUPM/BioBERT_Re_trained
null
A PyTorch model trained on the GAD dataset for relation classification, using BioBERT weights. A minimal usage sketch follows this entry.
88
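Since the card above gives no usage snippet, here is a minimal sketch of loading the checkpoint with the Transformers library. It assumes the repository ships a standard sequence-classification head and tokenizer files; the example sentence and the meaning of the output classes are purely illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Assumption: the checkpoint loads as a standard sequence-classification model.
model_id = "ChrisUPM/BioBERT_Re_trained"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# GAD-style relation classification takes a sentence containing gene/disease
# mentions; this input is illustrative only.
text = "Mutations in @GENE$ have been associated with @DISEASE$."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
# Class probabilities; the label meanings are not documented in the card.
print(logits.softmax(dim=-1))
```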
Jeevesh8/goog_bert_ft_cola-86
null
Entry not found
15
textattack/albert-base-v2-CoLA
null
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.8245445829338447, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack). A usage sketch follows this entry.
530
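A minimal sketch for scoring grammatical acceptability (CoLA) with this checkpoint. The label names returned by the pipeline (e.g. `LABEL_0`/`LABEL_1`) and their mapping to unacceptable/acceptable are an assumption, since the card does not document them.

```python
from transformers import pipeline

# Assumption: the checkpoint exposes a standard text-classification head.
classifier = pipeline("text-classification", model="textattack/albert-base-v2-CoLA")

sentences = [
    "The book was written by John.",   # grammatical
    "The book was written John by.",   # ungrammatical
]
for sentence, result in zip(sentences, classifier(sentences)):
    # Label names such as LABEL_0/LABEL_1 are not documented in the card;
    # verify their meaning before relying on them.
    print(sentence, "->", result)
```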
LiYuan/amazon-query-product-ranking
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-mnli-amazon-query-shopping results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mnli-amazon-query-shopping This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an [Amazon shopping query dataset](https://www.aicrowd.com/challenges/esci-challenge-for-improving-product-search). The code for the fine-tuning process can be found [here](https://github.com/vanderbilt-data-science/sna). This model is uncased: it does not make a difference between english and English. It achieves the following results on the evaluation set: - Loss: 0.8244 - Accuracy: 0.6617 ## Model description DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts using the BERT base model. We replaced its head with our shopping relevance categories and fine-tuned it on a 571,223-row training set, validating on a 142,806-row dev set. Finally, we evaluated the model's performance on a held-out test set of 79,337 rows. ## Intended uses & limitations DistilBERT is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. This fine-tuned version of DistilBERT is used to predict the relevance between one query and one product description. It can also be used to rerank the relevance order of products given one query, for the Amazon platform or other e-commerce platforms. A limitation is that this model focuses on Amazon queries and products; if you apply it to other domains, it may perform poorly. ## How to use You can use this model directly by downloading the trained weights and configuration as in the code snippet below: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("LiYuan/amazon-query-product-ranking") model = AutoModelForSequenceClassification.from_pretrained("LiYuan/amazon-query-product-ranking") ``` ## Training and evaluation data Download the raw [dataset](https://www.aicrowd.com/challenges/esci-challenge-for-improving-product-search/dataset_files) from the Amazon KDD Cup website. 1. Concatenate all product attributes from the product dataset 2. Join it with the training query dataset 3. Stratified-split the merged data into a 571,223-row training set, a 142,806-row validation set, and a 79,337-row test set 4.
Train on the full training set ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.8981 | 1.0 | 35702 | 0.8662 | 0.6371 | | 0.7837 | 2.0 | 71404 | 0.8244 | 0.6617 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
3,784
Jeevesh8/goog_bert_ft_cola-87
null
Entry not found
15
Jeevesh8/goog_bert_ft_cola-88
null
Entry not found
15
jb2k/bert-base-multilingual-cased-language-detection
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29", "LABEL_3", "LABEL_30", "LABEL_31", "LABEL_32", "LABEL_33", "LABEL_34", "LABEL_35", "LABEL_36", "LABEL_37", "LABEL_38", "LABEL_39", "LABEL_4", "LABEL_40", "LABEL_41", "LABEL_42", "LABEL_43", "LABEL_44", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
# bert-base-multilingual-cased-language-detection A model for language detection with support for 45 languages ## Model description This model was created by fine-tuning [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the [common language](https://huggingface.co/datasets/common_language) dataset. This dataset has support for 45 languages, which are listed below: ``` Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh ``` ## Evaluation This model was evaluated on the test split of the [common language](https://huggingface.co/datasets/common_language) dataset, and achieved the following metrics: * Accuracy: 97.8%
1,050
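The card above lists the supported languages but no code; below is a minimal usage sketch. Note that the checkpoint's labels are generic (`LABEL_0` through `LABEL_44`), so mapping a predicted label index to a language name is an assumption that should be verified against the `common_language` dataset's label order.

```python
from transformers import pipeline

# Assumption: labels LABEL_0..LABEL_44 follow the common_language label order.
detector = pipeline(
    "text-classification",
    model="jb2k/bert-base-multilingual-cased-language-detection",
)

examples = ["Bonjour, comment allez-vous ?", "Wie geht es dir heute?"]
for text, prediction in zip(examples, detector(examples)):
    # prediction["label"] is a generic LABEL_n; map it to a language name
    # via the dataset's label list before relying on it.
    print(text, "->", prediction)
```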
oliverqq/scibert-uncased-topics
[ "Artificial intelligence", "Computer science", "Economics", "Engineering", "Mathematics", "Medicine", "Psychology", "Sociology" ]
Entry not found
15
Cameron/BERT-rtgender-opgender-annotations
null
Entry not found
15
Jeevesh8/goog_bert_ft_cola-89
null
Entry not found
15
Intel/bert-base-uncased-mrpc
[ "equivalent", "not_equivalent" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: bert-base-uncased-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8602941176470589 - name: F1 type: f1 value: 0.9042016806722689 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6978 - Accuracy: 0.8603 - F1: 0.9042 - Combined Score: 0.8822 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu102 - Datasets 1.14.0 - Tokenizers 0.11.6
1,312
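The card above reports MRPC results but no usage code; here is a minimal sketch of paraphrase detection with this checkpoint, assuming the standard sentence-pair input format used for GLUE MRPC. The example sentences are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "Intel/bert-base-uncased-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC takes a pair of sentences and predicts equivalent / not_equivalent.
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly profits at the company were strong.",
    return_tensors="pt",
)
model.eval()
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)
label_id = int(probs.argmax())
print(model.config.id2label[label_id], float(probs[label_id]))
```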
Jeevesh8/goog_bert_ft_cola-90
null
Entry not found
15
ethanyt/guwen-sent
[ "Neg", "ImpNeg", "Nerual", "ImpPos", "Pos" ]
--- language: - "zh" thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png" tags: - "chinese" - "classical chinese" - "literary chinese" - "ancient chinese" - "bert" - "pytorch" - "sentiment classificatio" license: "apache-2.0" pipeline_tag: "text-classification" widget: - text: "滚滚长江东逝水,浪花淘尽英雄" - text: "寻寻觅觅,冷冷清清,凄凄惨惨戚戚" - text: "执手相看泪眼,竟无语凝噎,念去去,千里烟波,暮霭沉沉楚天阔。" - text: "忽如一夜春风来,干树万树梨花开" --- # Guwen Sent A Classical Chinese Poem Sentiment Classifier. See also: <a href="https://github.com/ethan-yt/guwen-models"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a> <a href="https://github.com/ethan-yt/cclue/"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a> <a href="https://github.com/ethan-yt/guwenbert/"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a>
1,326
Jeevesh8/goog_bert_ft_cola-91
null
Entry not found
15
Jeevesh8/goog_bert_ft_cola-92
null
Entry not found
15
mrm8488/distilroberta-finetuned-age_news-classification
[ "World", "Sports", "Business", "Sci/Tech" ]
--- language: en tags: - news - classification datasets: - ag_news widget: - text: "Venezuela Prepares for Chavez Recall Vote Supporters and rivals warn of possible fraud; government says Chavez's defeat could produce turmoil in world oil market." --- # distilroberta-base fine-tuned on age_news dataset for news classification Test set accuracy: 0.94. A minimal usage sketch follows this entry.
352
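As referenced in the card above, a minimal usage sketch (assuming the standard text-classification pipeline; the headline is taken from the card's widget text):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-age_news-classification",
)

headline = ("Venezuela Prepares for Chavez Recall Vote Supporters and rivals "
            "warn of possible fraud.")
# Expected labels: World, Sports, Business, Sci/Tech
print(classifier(headline))
```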
m-newhauser/distilbert-political-tweets
[ "Democrat", "Republican" ]
--- language: - en license: lgpl-3.0 library_name: transformers tags: - text-classification - transformers - pytorch - generated_from_keras_callback metrics: - accuracy - f1 datasets: - m-newhauser/senator-tweets widget: - text: "This pandemic has shown us clearly the vulgarity of our healthcare system. Highest costs in the world, yet not enough nurses or doctors. Many millions uninsured, while insurance company profits soar. The struggle continues. Healthcare is a human right. Medicare for all." example_title: "Bernie Sanders (D)" - text: "Team Biden would rather fund the Ayatollah's Death to America regime than allow Americans to produce energy for our own domestic consumption." example_title: "Ted Cruz (R)" --- # distilbert-political-tweets 🗣 🇺🇸 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [m-newhauser/senator-tweets](https://huggingface.co/datasets/m-newhauser/senator-tweets) dataset, which contains all tweets made by United States senators during the first year of the Biden Administration. It achieves the following results on the evaluation set: * Accuracy: 0.9076 * F1: 0.9117 ## Model description The goal of this model is to classify short pieces of text as having either Democratic or Republican sentiment. The model was fine-tuned on 99,693 tweets (51.6% Democrat, 48.4% Republican) made by US senators in 2021. Model accuracy may not hold up on pieces of text longer than a tweet. ### Training hyperparameters The following hyperparameters were used during training: - optimizer: Adam - training_precision: float32 - learning_rate = 5e-5 - num_epochs = 5 ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.6
1,771
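The card above does not show inference code; a minimal sketch follows, assuming the repository's weights can be loaded by the pipeline's framework auto-detection (the card lists TensorFlow framework versions). The example tweet is adapted from the card's widget.

```python
from transformers import pipeline

# Assumption: pipeline auto-detects and loads whichever weights the repo ships.
classifier = pipeline(
    "text-classification",
    model="m-newhauser/distilbert-political-tweets",
)

tweet = "Healthcare is a human right. Medicare for all."
print(classifier(tweet))  # expected labels: Democrat or Republican
```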
Jeevesh8/goog_bert_ft_cola-93
null
Entry not found
15
SkolkovoInstitute/xlmr_formality_classifier
[ "formal", "informal" ]
--- language: - en - fr - it - pt tags: - formal or informal classification licenses: - cc-by-nc-sa --- XLMRoberta-based classifier trained on XFORMAL. all | | precision | recall | f1-score | support | |--------------|-----------|----------|----------|---------| | 0 | 0.744912 | 0.927790 | 0.826354 | 108019 | | 1 | 0.889088 | 0.645630 | 0.748048 | 96845 | | accuracy | | | 0.794405 | 204864 | | macro avg | 0.817000 | 0.786710 | 0.787201 | 204864 | | weighted avg | 0.813068 | 0.794405 | 0.789337 | 204864 | en | | precision | recall | f1-score | support | |--------------|-----------|----------|----------|---------| | 0 | 0.800053 | 0.962981 | 0.873988 | 22151 | | 1 | 0.945106 | 0.725899 | 0.821124 | 19449 | | accuracy | | | 0.852139 | 41600 | | macro avg | 0.872579 | 0.844440 | 0.847556 | 41600 | | weighted avg | 0.867869 | 0.852139 | 0.849273 | 41600 | fr | | precision | recall | f1-score | support | |--------------|-----------|----------|----------|---------| | 0 | 0.746709 | 0.925738 | 0.826641 | 21505 | | 1 | 0.887305 | 0.650592 | 0.750731 | 19327 | | accuracy | | | 0.795504 | 40832 | | macro avg | 0.817007 | 0.788165 | 0.788686 | 40832 | | weighted avg | 0.813257 | 0.795504 | 0.790711 | 40832 | it | | precision | recall | f1-score | support | |--------------|-----------|----------|----------|---------| | 0 | 0.721282 | 0.914669 | 0.806545 | 21528 | | 1 | 0.864887 | 0.607135 | 0.713445 | 19368 | | accuracy | | | 0.769024 | 40896 | | macro avg | 0.793084 | 0.760902 | 0.759995 | 40896 | | weighted avg | 0.789292 | 0.769024 | 0.762454 | 40896 | pt | | precision | recall | f1-score | support | |--------------|-----------|----------|----------|---------| | 0 | 0.717546 | 0.908167 | 0.801681 | 21637 | | 1 | 0.853628 | 0.599700 | 0.704481 | 19323 | | accuracy | | | 0.762646 | 40960 | | macro avg | 0.785587 | 0.753933 | 0.753081 | 40960 | | weighted avg | 0.781743 | 0.762646 | 0.755826 | 40960 | ## How to use ```python from transformers import XLMRobertaTokenizerFast, XLMRobertaForSequenceClassification # load tokenizer and model weights tokenizer = XLMRobertaTokenizerFast.from_pretrained('SkolkovoInstitute/xlmr_formality_classifier') model = XLMRobertaForSequenceClassification.from_pretrained('SkolkovoInstitute/xlmr_formality_classifier') # prepare the input batch = tokenizer.encode('ты супер', return_tensors='pt') # inference model(batch) ``` ## Licensing Information [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
3,099
Jeevesh8/goog_bert_ft_cola-94
null
Entry not found
15
yseop/distilbert-base-financial-relation-extraction
[ "are", "has", "is", "is in", "x" ]
--- inference: true pipeline_tag: text-classification tags: - feature-extraction - text-classification library: pytorch --- <div style="clear: both;"> <div style="float: left; margin-right 1em;"> <h1><strong>FReE (Financial Relation Extraction)</strong></h1> </div> <div> <h2><img src="https://pbs.twimg.com/profile_images/1333760924914753538/fQL4zLUw_400x400.png" alt="" width="25" height="25"></h2> </div> </div> We present FReE, a [DistilBERT](https://huggingface.co/distilbert-base-uncased) base model fine-tuned on a custom financial dataset for financial relation type detection and classification. ## Process The model detects the presence of a relationship between financial terms and, when one is present, classifies its type. Example use cases: * An A-B trust is a joint trust created by a married couple for the purpose of minimizing estate taxes. (<em>Relationship **exists**, type: **is**</em>) * There are no withdrawal penalties. (<em>Relationship **does not exist**, type: **x**</em>) ## Data The data consists of financial definitions collected from different sources (Wikimedia, IFRS, Investopedia) for financial indicators. Each definition has been split up into sentences, and term relationships in a sentence have been extracted using the [Stanford Open Information Extraction](https://nlp.stanford.edu/software/openie.html) module. A typical row in the dataset consists of a definition sentence and its corresponding relationship label. The labels were restricted to the 5 most-widely identified relationships, namely: **x** (no relationship), **has**, **is in**, **is** and **are**. ## Model The model used is a standard DistilBERT-base transformer model from the Hugging Face library. See [HUGGING FACE DistilBERT base model](https://huggingface.co/distilbert-base-uncased) for more details about the model. In addition, the model has been pretrained to initialize weights that would otherwise be unused if loaded from an existing pretrained stock model. ## Metrics The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set. | relation | precision | recall | f1-score | support | | ------------- |:-------------:|:-------------:|:-------------:| -----:| | has | 0.7416 | 0.9674 | 0.8396 | 2362 | | is in | 0.7813 | 0.7925 | 0.7869 | 2362 | | is | 0.8650 | 0.6863 | 0.7653 | 2362 | | are | 0.8365 | 0.8493 | 0.8429 | 2362 | | x | 0.9515 | 0.8302 | 0.8867 | 2362 | | | | | | | | macro avg | 0.8352 | 0.8251 | 0.8243 | 11810 | | weighted avg | 0.8352 | 0.8251 | 0.8243 | 11810 |
2,655
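The FReE card describes the task and metrics but gives no inference snippet; here is a minimal sketch, assuming the checkpoint works with the standard text-classification pipeline and that the example sentences behave like the ones quoted in the card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yseop/distilbert-base-financial-relation-extraction",
)

sentences = [
    # Card example: expected to contain a relation of type "is".
    "An A-B trust is a joint trust created by a married couple.",
    # Card example: expected label "x" (no relationship).
    "There are no withdrawal penalties.",
]
for sentence, result in zip(sentences, classifier(sentences)):
    print(sentence, "->", result)
```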
Jeevesh8/goog_bert_ft_cola-95
null
Entry not found
15
Jeevesh8/goog_bert_ft_cola-96
null
Entry not found
15
ans/vaccinating-covid-tweets
[ "false", "misleading", "true" ]
--- language: en license: apache-2.0 datasets: - tweets widget: - text: "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic." --- # Disclaimer: This page is under maintenance. Please DO NOT refer to the information on this page to make any decision yet. # Vaccinating COVID tweets A fine-tuned model for the fact-classification task on English tweets about COVID-19/vaccine. ## Intended uses & limitations You can classify whether an input tweet (or any other statement) about COVID-19/vaccine is `true`, `false` or `misleading`. Note that since this model was trained with data up to May 2021, the most recent information may not be reflected. #### How to use You can use this model directly on this page or using `transformers` in Python. - Load pipeline and implement with input sequence ```python from transformers import pipeline pipe = pipeline("sentiment-analysis", model = "ans/vaccinating-covid-tweets") seq = "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic." pipe(seq) ``` - Expected output ```python [ { "label": "false", "score": 0.07972867041826248 }, { "label": "misleading", "score": 0.019911376759409904 }, { "label": "true", "score": 0.9003599882125854 } ] ``` - `true` examples ```python "By the end of 2020, several vaccines had become available for use in different parts of the world." "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic." "RNA vaccines were the first vaccines for SARS-CoV-2 to be produced and represent an entirely new vaccine approach." ``` - `false` examples ```python "COVID-19 vaccine caused new strain in UK." ``` #### Limitations and bias To conservatively classify whether an input sequence is true or not, the model may have predictions biased toward `false` or `misleading`. ## Training data & Procedure #### Pre-trained baseline model - Pre-trained model: [BERTweet](https://github.com/VinAIResearch/BERTweet) - trained based on the RoBERTa pre-training procedure - 850M General English Tweets (Jan 2012 to Aug 2019) - 23M COVID-19 English Tweets - Size of the model: >134M parameters - Further training - Pre-training with recent COVID-19/vaccine tweets and fine-tuning for fact classification #### 1) Pre-training language model - The model was pre-trained on COVID-19/vaccine-related tweets using a masked language modeling (MLM) objective starting from BERTweet. - The following datasets of English tweets were used: - Tweets with trending #CovidVaccine hashtag, 207,000 tweets uploaded across Aug 2020 to Apr 2021 ([kaggle](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets)) - Tweets about all COVID-19 vaccines, 78,000 tweets uploaded across Dec 2020 to May 2021 ([kaggle](https://www.kaggle.com/gpreda/all-covid19-vaccines-tweets)) - COVID-19 Twitter chatter dataset, 590,000 tweets uploaded across Mar 2021 to May 2021 ([github](https://github.com/thepanacealab/covid19_twitter)) #### 2) Fine-tuning for fact classification - A fine-tuned model from pre-trained language model (1) for the fact-classification task on COVID-19/vaccine. - COVID-19/vaccine-related statements were collected from [Poynter](https://www.poynter.org/ifcn-covid-19-misinformation/) and [Snopes](https://www.snopes.com/) using Selenium resulting in over 14,000 fact-checked statements from Jan 2020 to May 2021. 
- Original labels were divided within following three categories: - `False`: includes false, no evidence, manipulated, fake, not true, unproven and unverified - `Misleading`: includes misleading, exaggerated, out of context and needs context - `True`: includes true and correct ## Evaluation results | Training loss | Validation loss | Training accuracy | Validation accuracy | | --- | --- | --- | --- | | 0.1062 | 0.1006 | 96.3% | 94.5% | # Contributors - This model is a part of final team project from MLDL for DS class at SNU. - Team BIBI - Vaccinating COVID-NineTweets - Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom - Advisor: Prof. Wen-Syan Li <a href="https://gsds.snu.ac.kr/"><img src="https://gsds.snu.ac.kr/wp-content/uploads/sites/50/2021/04/GSDS_logo2-e1619068952717.png" width="200" height="80"></a>
4,394
Jeevesh8/goog_bert_ft_cola-97
null
Entry not found
15
textattack/distilbert-base-cased-QQP
null
Entry not found
15
Kayvane/distilvert-complaints-subproduct
[ "", "(CD) Certificate of deposit", "Auto", "Auto debt", "CD (Certificate of Deposit)", "Cashing a check without an account", "Check cashing", "Check cashing service", "Checking account", "Conventional adjustable mortgage (ARM)", "Conventional fixed mortgage", "Conventional home mortgage", "Credit card", "Credit card debt", "Credit repair", "Credit repair services", "Credit reporting", "Debt settlement", "Domestic (US) money transfer", "Electronic Benefit Transfer / EBT card", "FHA mortgage", "Federal student loan", "Federal student loan debt", "Federal student loan servicing", "Foreign currency exchange", "General purpose card", "General-purpose credit card or charge card", "General-purpose prepaid card", "Gift card", "Gift or merchant card", "Government benefit card", "Government benefit payment card", "Home equity loan or line of credit", "Home equity loan or line of credit (HELOC)", "I do not know", "ID prepaid card", "Installment loan", "International money transfer", "Lease", "Loan", "Medical", "Medical debt", "Mobile or digital wallet", "Mobile wallet", "Money order", "Mortgage", "Mortgage debt", "Non-federal student loan", "Other (i.e. phone, health club, etc.)", "Other bank product/service", "Other banking product or service", "Other debt", "Other mortgage", "Other personal consumer report", "Other special purpose card", "Other type of mortgage", "Pawn loan", "Payday loan", "Payday loan debt", "Payroll card", "Personal line of credit", "Private student loan", "Private student loan debt", "Refund anticipation check", "Reverse mortgage", "Savings account", "Second mortgage", "Store credit card", "Student prepaid card", "Title loan", "Transit card", "Traveler's check or cashier's check", "Traveler’s/Cashier’s checks", "VA mortgage", "Vehicle lease", "Vehicle loan", "Virtual currency" ]
Entry not found
15
moussaKam/frugalscore_medium_bert-base_bert-score
[ "LABEL_0" ]
# FrugalScore FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance. Paper: https://arxiv.org/abs/2110.08559?context=cs Project github: https://github.com/moussaKam/FrugalScore The pretrained checkpoints presented in the paper: | FrugalScore | Student | Teacher | Method | |----------------------------------------------------|-------------|----------------|------------| | [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore | | [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore | | [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore | | [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore | | [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore | | [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
2,592
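The card above lists the checkpoints but not how to score with them; below is a minimal sketch of using this checkpoint directly as a learned metric. Treating the single output logit as the learned BERTScore approximation is an assumption; the project GitHub linked above has the reference evaluation code.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "moussaKam/frugalscore_medium_bert-base_bert-score"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

references = ["hello world", "the cat sat on the mat"]
candidates = ["hi world", "a dog ran in the park"]

# The model scores (reference, candidate) pairs; the single logit is
# interpreted here as the distilled metric score (assumption).
inputs = tokenizer(references, candidates, return_tensors="pt",
                   padding=True, truncation=True)
model.eval()
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)
print(scores)
```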
Jeevesh8/goog_bert_ft_cola-98
null
Entry not found
15
castorini/monobert-large-msmarco-finetune-only
null
# Model Description This checkpoint is a direct conversion of [BERT_Large_trained_on_MSMARCO.zip](https://drive.google.com/open?id=1crlASTMlsihALlkabAQP6JTYIZwC1Wm8) from the original [repo](https://github.com/nyu-dl/dl4marco-bert/). The corresponding model class is BertForSequenceClassification, and its purpose is MS MARCO passage ranking. Please see the original repo for more details on its training settings (hyperparameters, hardware, and data).
455
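The card above does not include code; a minimal sketch of scoring a query-passage pair for reranking follows, assuming the usual monoBERT input format (query and passage as a sentence pair) and that the second class logit corresponds to relevance. The query and passage are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "castorini/monobert-large-msmarco-finetune-only"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

query = "what causes tides"
passage = ("Tides are caused by the gravitational pull of the moon and the sun "
           "acting on the oceans.")

inputs = tokenizer(query, passage, return_tensors="pt",
                   truncation=True, max_length=512)
model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
# Assumption: index 1 is the "relevant" class, as in the original monoBERT setup.
relevance = logits.softmax(dim=-1)[0, 1].item()
print(relevance)
```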
Jeevesh8/goog_bert_ft_cola-99
null
Entry not found
15
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5
[ "Algeria", "Bahrain", "Djibouti", "Egypt", "Iraq", "Jordan", "Kuwait", "Lebanon", "Libya", "Mauritania", "Morocco", "Oman", "Palestine", "Qatar", "Saudi_Arabia", "Somalia", "Sudan", "Syria", "Tunisia", "United_Arab_Emirates", "Yemen" ]
--- language: - ar license: apache-2.0 widget: - text: "عامل ايه ؟" --- # CAMeLBERT-MSA DID MADAR Twitter-5 Model ## Model description **CAMeLBERT-MSA DID MADAR Twitter-5 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model. For the fine-tuning, we used the [MADAR Twitter-5](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 21 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5') >>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟'] >>> did(sentences) [{'label': 'Egypt', 'score': 0.5741344094276428}, {'label': 'Kuwait', 'score': 0.5225679278373718}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
2,968
MutazYoune/Absa_AspectSentiment_hotels
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
cardiffnlp/twitter-roberta-base-stance-abortion
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
0
mrm8488/deberta-v3-base-goemotions
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: deberta-v3-base-goemotions results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-base-goemotions This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7610 - F1: 0.4468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.5709 | 1.0 | 6164 | 1.5211 | 0.4039 | | 1.3689 | 2.0 | 12328 | 1.5466 | 0.4198 | | 1.1819 | 3.0 | 18492 | 1.5670 | 0.4520 | | 1.0059 | 4.0 | 24656 | 1.6673 | 0.4479 | | 0.8129 | 5.0 | 30820 | 1.7610 | 0.4468 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
1,583
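Since the usage information is missing from the card above, here is a minimal sketch. The checkpoint only exposes generic `LABEL_n` names, so mapping them to GoEmotions emotion names is an assumption that should be checked against the dataset's label order; whether the model was trained single-label or multi-label is also undocumented, so the sketch simply reports the top-scoring classes.

```python
from transformers import pipeline

# Assumption: standard text-classification head; labels are generic LABEL_n.
classifier = pipeline(
    "text-classification",
    model="mrm8488/deberta-v3-base-goemotions",
    top_k=3,  # show the three highest-scoring labels
)

print(classifier("I can't believe this worked, I'm so happy!"))
```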
jakelever/coronabert
[ "Clinical Reports", "Comment/Editorial", "Communication", "Contact Tracing", "Diagnostics", "Drug Targets", "Education", "Effect on Medical Specialties", "Forecasting & Modelling", "Health Policy", "Healthcare Workers", "Imaging", "Immunology", "Inequality", "Infection Reports", "Long Haul", "Medical Devices", "Meta-analysis", "Misinformation", "Model Systems & Tools", "Molecular Biology", "News", "Non-human", "Non-medical", "Pediatrics", "Prevalence", "Prevention", "Psychology", "Recommendations", "Review", "Risk Factors", "Surveillance", "Therapeutics", "Transmission", "Vaccines" ]
--- language: en thumbnail: https://coronacentral.ai/logo-with-name.png?1 tags: - coronavirus - covid - bionlp datasets: - cord19 - pubmed license: mit widget: - text: "Pre-existing T-cell immunity to SARS-CoV-2 in unexposed healthy controls in Ecuador, as detected with a COVID-19 Interferon-Gamma Release Assay." - text: "Lifestyle and mental health disruptions during COVID-19." - text: "More than 50 Long-term effects of COVID-19: a systematic review and meta-analysis" --- # CoronaCentral BERT Model for Topic / Article Type Classification This is the topic / article type multi-label classification for the [CoronaCentral website](https://coronacentral.ai). This forms part of the pipeline for downloading and processing coronavirus literature described in the [corona-ml repo](https://github.com/jakelever/corona-ml) with available [step-by-step descriptions](https://github.com/jakelever/corona-ml/blob/master/stepByStep.md). The method is described in the [preprint](https://doi.org/10.1101/2020.12.21.423860) and detailed performance results can be found in the [machine learning details](https://github.com/jakelever/corona-ml/blob/master/machineLearningDetails.md) document. This model was derived by fine-tuning the [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) model on this coronavirus sequence (document) classification task. ## Usage Below are two Google Colab notebooks with example usage of this sequence classification model using HuggingFace transformers and KTrain. - [HuggingFace example on Google Colab](https://colab.research.google.com/drive/1cBNgKd4o6FNWwjKXXQQsC_SaX1kOXDa4?usp=sharing) - [KTrain example on Google Colab](https://colab.research.google.com/drive/1h7oJa2NDjnBEoox0D5vwXrxiCHj3B1kU?usp=sharing) ## Training Data The model is trained on ~3200 manually-curated articles sampled at various stages during the coronavirus pandemic. The code for training is available in the [category\_prediction](https://github.com/jakelever/corona-ml/tree/master/category_prediction) directory of the main Github Repo. The data is available in the [annotated_documents.json.gz](https://github.com/jakelever/corona-ml/blob/master/category_prediction/annotated_documents.json.gz) file. ## Inputs and Outputs The model takes in a tokenized title and abstract (combined into a single string and separated by a new line). The outputs are topics and article types, broadly called categories in the pipeline code. The types are listed below. Some others are managed by hand-coded rules described in the [step-by-step descriptions](https://github.com/jakelever/corona-ml/blob/master/stepByStep.md). ### List of Article Types - Comment/Editorial - Meta-analysis - News - Review ### List of Topics - Clinical Reports - Communication - Contact Tracing - Diagnostics - Drug Targets - Education - Effect on Medical Specialties - Forecasting & Modelling - Health Policy - Healthcare Workers - Imaging - Immunology - Inequality - Infection Reports - Long Haul - Medical Devices - Misinformation - Model Systems & Tools - Molecular Biology - Non-human - Non-medical - Pediatrics - Prevalence - Prevention - Psychology - Recommendations - Risk Factors - Surveillance - Therapeutics - Transmission - Vaccines
3,313
TransQuest/monotransquest-da-en_any
[ "LABEL_0" ]
--- language: en-multilingual tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-en_any", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
5,407
ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa
[ "Positive", "Neutral", "Negative" ]
--- license: mit tags: - generated_from_trainer datasets: - indonlu metrics: - accuracy model-index: - name: bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa results: - task: name: Text Classification type: text-classification dataset: name: indonlu type: indonlu args: smsa metrics: - name: Accuracy type: accuracy value: 0.9373015873015873 language: id widget: - text: "Saya mengapresiasi usaha anda" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.3390 - Accuracy: 0.9373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2864 | 1.0 | 688 | 0.2154 | 0.9286 | | 0.1648 | 2.0 | 1376 | 0.2238 | 0.9357 | | 0.0759 | 3.0 | 2064 | 0.3351 | 0.9365 | | 0.044 | 4.0 | 2752 | 0.3390 | 0.9373 | | 0.0308 | 5.0 | 3440 | 0.4346 | 0.9365 | | 0.0113 | 6.0 | 4128 | 0.4708 | 0.9365 | | 0.006 | 7.0 | 4816 | 0.5533 | 0.9325 | | 0.0047 | 8.0 | 5504 | 0.5888 | 0.9310 | | 0.0001 | 9.0 | 6192 | 0.5961 | 0.9333 | | 0.0 | 10.0 | 6880 | 0.5992 | 0.9357 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
2,303
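The card above reports training details but no inference snippet; a minimal sketch follows, reusing the widget text from the card. The label names (Positive/Neutral/Negative) come from the model's label list.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
)

# "Saya mengapresiasi usaha anda" = "I appreciate your effort" (Indonesian)
print(classifier("Saya mengapresiasi usaha anda"))
```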
SynamicTechnologies/CYBERT
null
## CYBERT A BERT model dedicated to the domain of cyber security. The model has been trained on a corpus of high-quality cyber security and computer science text and is unlikely to work outside this domain. ## Model architecture The model architecture is the original RoBERTa, and the tokenizer used to train on the corpus is byte-level. ## Hardware The model was trained on a GPU (NVIDIA-SMI driver 510.54).
388
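The card above does not say how the model is meant to be consumed; as a minimal, hedged sketch, the encoder can at least be loaded for feature extraction with the generic Auto classes. Whether the repository also ships a task-specific head is not documented.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Assumption: the repo contains a RoBERTa-style encoder and a byte-level tokenizer.
model_id = "SynamicTechnologies/CYBERT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "The malware exploits a buffer overflow vulnerability in the server."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, tokens, hidden_size)
```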
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
[ "Algeria", "Bahrain", "Djibouti", "Egypt", "Iraq", "Jordan", "Kuwait", "Lebanon", "Libya", "Mauritania", "Morocco", "Oman", "Palestine", "Qatar", "Saudi_Arabia", "Somalia", "Sudan", "Syria", "Tunisia", "United_Arab_Emirates", "Yemen" ]
--- language: - ar license: apache-2.0 widget: - text: "عامل ايه ؟" --- # CAMeLBERT-Mix DID NADI Model ## Model description **CAMeLBERT-Mix DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model. For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi') >>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟'] >>> did(sentences) [{'label': 'Egypt', 'score': 0.920274019241333}, {'label': 'Saudi_Arabia', 'score': 0.26750022172927856}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
2,927
Elron/bleurt-tiny-512
[ "LABEL_0" ]
## BLEURT Pytorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224). ## Usage Example ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512") model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512") model.eval() references = ["hello world", "hello world"] candidates = ["hi universe", "bye world"] with torch.no_grad(): scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze() print(scores) # tensor([-0.9414, -0.5678]) ```
1,001
larskjeldgaard/senda
[ "negativ", "neutral", "positiv" ]
--- language: da tags: - danish - bert - sentiment - polarity license: cc-by-4.0 widget: - text: "Sikke en dejlig dag det er i dag" --- # Danish BERT fine-tuned for Sentiment Analysis (Polarity) This model detects polarity ('positive', 'neutral', 'negative') of Danish texts. It is trained and tested on tweets annotated by [Alexandra Institute](https://github.com/alexandrainst). Here is an example of how to load the model in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("larskjeldgaard/senda") model = AutoModelForSequenceClassification.from_pretrained("larskjeldgaard/senda") # create 'senda' sentiment analysis pipeline senda_pipeline = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) senda_pipeline("Sikke en dejlig dag det er i dag") ```
948
tae898/emoberta-base
[ "neutral", "joy", "surprise", "anger", "sadness", "disgust", "fear" ]
--- language: en tags: - emoberta - roberta license: mit datasets: - MELD - IEMOCAP --- Check https://github.com/tae898/erc for the details [Watch a demo video!](https://youtu.be/qbr7fNd6J28) # Emotion Recognition in Conversation (ERC) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/emoberta-speaker-aware-emotion-recognition-in/emotion-recognition-in-conversation-on)](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/emoberta-speaker-aware-emotion-recognition-in/emotion-recognition-in-conversation-on-meld)](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in) At the moment, we only use the text modality to correctly classify the emotion of the utterances. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP). ## Prerequisites 1. An x86-64 Unix or Unix-like machine 1. Python 3.8 or higher 1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up with the system python. 1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule) 1. pip install -r requirements.txt ## EmoBERTa training First configure the hyperparameters and the dataset in `train-erc-text.yaml`, and then run the below commands in this directory. I recommend running this in a virtualenv. ```sh python train-erc-text.py ``` This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`. ## Results on the test split (weighted f1 scores) | Model | | MELD | IEMOCAP | | -------- | ------------------------------- | :-------: | :-------: | | EmoBERTa | No past and future utterances | 63.46 | 56.09 | | | Only past utterances | 64.55 | **68.57** | | | Only future utterances | 64.23 | 66.56 | | | Both past and future utterances | **65.61** | 67.42 | | | → *without speaker names* | 65.07 | 64.02 | Above numbers are the mean values of five random seed runs. If you want to see more training test details, check out `./results/` If you want to download the trained checkpoints and stuff, then [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download) is where you can download them. It's a pretty big zip file. ## Deployment ### Huggingface We have released our models on huggingface: - [emoberta-base](https://huggingface.co/tae898/emoberta-base) - [emoberta-large](https://huggingface.co/tae898/emoberta-large) They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are neither speaker-aware nor take previous utterances into account, meaning that they only classify one utterance at a time without the speaker information (e.g., "I love you"). ### Flask app You can either run the Flask RESTful server app as a docker container or just as a python script. 1. Running the app as a docker container **(recommended)**. There are four images. Take what you need: - `docker run -it --rm -p 10006:10006 tae898/emoberta-base` - `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda` - `docker run -it --rm -p 10006:10006 tae898/emoberta-large` - `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda` 1. 
Running the app in your python environment: This method is less recommended than the docker one. Run `pip install -r requirements-deploy.txt` first.<br> [`app.py`](app.py) is a Flask RESTful server. The usage is below: ```console app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE] ``` For example: ```sh python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base ``` ### Client Once the app is running, you can send a text to the server. First install the necessary packages: `pip install -r requirements-client.txt`, and then run [client.py](client.py). The usage is as below: ```console client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT ``` For example: ```sh python client.py --text "Emotion recognition is so cool\!" ``` will give you: ```json { "neutral": 0.0049800905, "joy": 0.96399665, "surprise": 0.018937444, "anger": 0.0071516023, "sadness": 0.002021492, "disgust": 0.001495996, "fear": 0.0014167271 } ``` ## Troubleshooting The best way to find and solve your problems is to check the GitHub issues tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive. ## Contributing Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. 1. Fork the Project 1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`) 1. Run `make style && quality` in the root repo directory to ensure code quality. 1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`) 1. Push to the Branch (`git push origin feature/AmazingFeature`) 1. Open a Pull Request ## Cite our work Check out the [paper](https://arxiv.org/abs/2108.12009). ```bibtex @misc{kim2021emoberta, title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa}, author={Taewoon Kim and Piek Vossen}, year={2021}, eprint={2108.12009}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` [![DOI](https://zenodo.org/badge/328375452.svg)](https://zenodo.org/badge/latestdoi/328375452)<br> ## Authors - [Taewoon Kim](https://taewoonkim.com/) ## License [MIT](https://choosealicense.com/licenses/mit/)
6,025
federicopascual/finetuning-sentiment-model-3000-samples
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8734177215189873 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3404 - Accuracy: 0.8667 - F1: 0.8734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
1,522
Elron/bleurt-large-512
[ "LABEL_0" ]
## BLEURT PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224). ## Usage Example ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-512") model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-512") model.eval() references = ["hello world", "hello world"] candidates = ["hi universe", "bye world"] with torch.no_grad(): scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze() print(scores) # tensor([0.9877, 0.0475]) ```
998
howey/roberta-large-sst2
null
Entry not found
15
madhurjindal/autonlp-Gibberish-Detector-492513457
[ "clean", "mild gibberish", "noise", "word salad" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - madhurjindal/autonlp-data-Gibberish-Detector co2_eq_emissions: 5.527544460835904 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 492513457 - CO2 Emissions (in grams): 5.527544460835904 ## Validation Metrics - Loss: 0.07609463483095169 - Accuracy: 0.9735624586913417 - Macro F1: 0.9736173135739408 - Micro F1: 0.9735624586913417 - Weighted F1: 0.9736173135739408 - Macro Precision: 0.9737771415197378 - Micro Precision: 0.9735624586913417 - Weighted Precision: 0.9737771415197378 - Macro Recall: 0.9735624586913417 - Micro Recall: 0.9735624586913417 - Weighted Recall: 0.9735624586913417 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/madhurjindal/autonlp-Gibberish-Detector-492513457 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("madhurjindal/autonlp-Gibberish-Detector-492513457", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("madhurjindal/autonlp-Gibberish-Detector-492513457", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,425
helliun/polhol
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
wukevin/tcr-bert
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29", "LABEL_3", "LABEL_30", "LABEL_31", "LABEL_32", "LABEL_33", "LABEL_34", "LABEL_35", "LABEL_36", "LABEL_37", "LABEL_38", "LABEL_39", "LABEL_4", "LABEL_40", "LABEL_41", "LABEL_42", "LABEL_43", "LABEL_44", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
# TCR transformer model See our full [codebase](https://github.com/wukevin/tcr-bert) and our [preprint](https://www.biorxiv.org/content/10.1101/2021.11.18.469186v1) for more information. This model is trained on: - Masked language modeling (masked amino acid or MAA modeling) - Classification across antigen labels from PIRD If you are looking for a model trained only on MAA, please see our [other model](https://huggingface.co/wukevin/tcr-bert-mlm-only). Example inputs: * `C A S S P V T G G I Y G Y T F` (binds to NLVPMVATV CMV antigen) * `C A T S G R A G V E Q F F` (binds to GILGFVFTL flu antigen)
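The card above does not include a usage snippet, so the following is a minimal sketch of how the classification head could be queried with Hugging Face Transformers. It assumes the checkpoint loads with the standard `AutoTokenizer`/`AutoModelForSequenceClassification` classes and that inputs are space-separated amino acids as in the example inputs above; the mapping from `LABEL_0`-`LABEL_44` to antigens is not documented here, so consult the linked codebase before interpreting the outputs.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "wukevin/tcr-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# TCR CDR3 sequence, space-separated as in the example inputs above
sequence = "C A S S P V T G G I Y G Y T F"

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits[0], dim=-1)

# Print the three highest-scoring label ids; what antigen each LABEL_i denotes
# is defined in the project repository, not in this card.
top = torch.topk(probs, k=3)
for p, i in zip(top.values.tolist(), top.indices.tolist()):
    print(model.config.id2label[i], round(p, 4))
```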
600
Narrativaai/fake-news-detection-spanish
[ "REAL", "FAKE" ]
--- language: es tags: - generated_from_trainer - fake - news - competition datasets: - fakedes widget: - text: 'La palabra "haiga", aceptada por la RAE [SEP] La palabra "haiga", aceptada por la RAE La Real Academia de la Lengua (RAE), ha aceptado el uso de "HAIGA", para su utilización en las tres personas del singular del presente del subjuntivo del verbo hacer, aunque asegura que la forma más recomendable en la lengua culta para este tiempo, sigue siendo "haya". Así lo han confirmado fuentes de la RAE, que explican que este cambio ha sido propuesto y aprobado por el pleno de la Academia de la Lengua, tras la extendida utilización por todo el territorio nacional, sobre todo, empleado por personas carentes de estudios o con estudios básicos de graduado escolar. Ya no será objeto de burla ese compañero que a diario repite aquello de "Mientras que haiga faena, no podemos quejarnos" o esa abuela que repite aquello de "El que haiga sacao los juguetes, que los recoja". Entre otras palabras novedosas que ha aceptado la RAE, contamos también con "Descambiar", significa deshacer un cambio, por ejemplo "devolver la compra". Visto lo visto, nadie apostaría que la palabra "follamigos" sea la siguiente de la lista.' metrics: - f1 - accuracy model-index: - name: roberta-large-fake-news-detection-spanish results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RoBERTa-large-fake-news-detection-spanish This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on an [Spanish Fake News Dataset](https://sites.google.com/view/iberlef2020/#h.p_w0c31bn0r-SW). It achieves the following results on the evaluation set: - Loss: 1.7474 - F1: **0.7717** - Accuracy: 0.7797 > So, based on the [leaderboard](https://sites.google.com/view/fakedes/results?authuser=0) our model **outperforms** the best model (scores F1 = 0.7666). ## Model description RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. ## Intended uses & limitations The objective of this task is to decide if a news item is fake or real by analyzing its textual representation. ## Training and evaluation data **FakeDeS**: [Fake News Detection in Spanish Shared Task](https://sites.google.com/view/fakedes/home) Fake news provides information that aims to manipulate people for different purposes: terrorism, political elections, advertisement, satire, among others. In social networks, misinformation extends in seconds among thousands of people, so it is necessary to develop tools that help control the amount of false information on the web. Similar tasks are detection of popularity in social networks and detection of subjectivity of messages in this media. A fake news detection system aims to help users detect and filter out potentially deceptive news. The prediction of intentionally misleading news is based on the analysis of truthful and fraudulent previously reviewed news, i.e., annotated corpora. 
The Spanish Fake News Corpus is a collection of news compiled from several web sources: established newspapers websites,media companies websites, special websites dedicated to validating fake news, websites designated by different journalists as sites that regularly publish fake news. The news were collected from January to July of 2018 and all of them were written in Mexican Spanish. The corpus has 971 news collected from January to July, 2018, from different sources: - Established newspapers websites, - Media companies websites, - Special websites dedicated to validating fake news, - Websites designated by different journalists as sites that regularly publish fake news. The corpus was tagged considering only two classes (true or fake), following a manual labeling process: - A news is true if there is evidence that it has been published in reliable sites. - A news is fake if there is news from reliable sites or specialized website in detection of deceptive content that contradicts it or no other evidence was found about the news besides the source. - We collected the true-fake news pair of an event so there is a correlation of news in the corpus. In order to avoid topic bias, the corpus covers news from 9 different topics: Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society. As it can be seen in the table below, the number of fake and true news is quite balanced. Approximately 70% will be used as training corpus (676 news), and the 30% as testing corpus (295 news). The training corpus contains the following information: - Category: Fake/ True - Topic: Science/ Sport/ Economy/ Education/ Entertainment/ Politics, Health/ Security/ Society - Headline: The title of the news. - Text: The complete text of the news. - Link: The URL where the news was published. More information needed ## Training procedure TBA ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | No log | 1.0 | 243 | 0.6282 | 0.7513 | 0.75 | | No log | 2.0 | 486 | 0.9600 | 0.7346 | 0.7587 | | 0.5099 | 3.0 | 729 | 1.2128 | 0.7656 | 0.7570 | | 0.5099 | 4.0 | 972 | 1.4001 | 0.7606 | 0.7622 | | 0.1949 | 5.0 | 1215 | 1.9748 | 0.6475 | 0.7220 | | 0.1949 | 6.0 | 1458 | 1.7386 | 0.7706 | 0.7710 | | 0.0263 | 7.0 | 1701 | 1.7474 | 0.7717 | 0.7797 | | 0.0263 | 8.0 | 1944 | 1.8114 | 0.7695 | 0.7780 | | 0.0046 | 9.0 | 2187 | 1.8444 | 0.7709 | 0.7797 | | 0.0046 | 10.0 | 2430 | 1.8552 | 0.7709 | 0.7797 | ### Fast usage with HF `pipelines` ```python from transformers import pipeline ckpt = "Narrativaai/fake-news-detection-spanish" classifier = pipeline("text-classification", model=ckpt) headline = "Your headline" text = "Your article text here..." classifier(headline + " [SEP] " + text) ``` ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3 Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
7,116
TehranNLP-org/bert-base-uncased-cls-sst2
null
Entry not found
15
savasy/bert-turkish-text-classification
[ "world", "economy", "culture", "health", "politics", "sport", "technology" ]
--- language: tr --- # Turkish Text Classification This model is a fine-tuned version of https://github.com/stefan-it/turkish-bert, trained on text classification data with the following 7 categories: ``` code_to_label={ 'LABEL_0': 'dunya ', 'LABEL_1': 'ekonomi ', 'LABEL_2': 'kultur ', 'LABEL_3': 'saglik ', 'LABEL_4': 'siyaset ', 'LABEL_5': 'spor ', 'LABEL_6': 'teknoloji '} ``` ## Data The following Turkish benchmark dataset is used for fine-tuning: https://www.kaggle.com/savasy/ttc4900 ## Quick Start Begin by installing transformers as follows > pip install transformers ``` # Code: # import libraries from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer, AutoModelForSequenceClassification tokenizer= AutoTokenizer.from_pretrained("savasy/bert-turkish-text-classification") # build and load model, it takes time depending on your internet connection model= AutoModelForSequenceClassification.from_pretrained("savasy/bert-turkish-text-classification") # make pipeline nlp=pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) # apply model nlp("bla bla") # [{'label': 'LABEL_2', 'score': 0.4753005802631378}] code_to_label={ 'LABEL_0': 'dunya ', 'LABEL_1': 'ekonomi ', 'LABEL_2': 'kultur ', 'LABEL_3': 'saglik ', 'LABEL_4': 'siyaset ', 'LABEL_5': 'spor ', 'LABEL_6': 'teknoloji '} code_to_label[nlp("bla bla")[0]['label']] # > 'kultur ' ``` ## How the model was trained ``` ## loading data for Turkish text classification import pandas as pd # https://www.kaggle.com/savasy/ttc4900 df=pd.read_csv("7allV03.csv") df.columns=["labels","text"] df.labels=pd.Categorical(df.labels) train_df=... eval_df=... # model from simpletransformers.classification import ClassificationModel import torch,sklearn # check whether a GPU is available before building the model cuda_available = torch.cuda.is_available() model_args = { "use_early_stopping": True, "early_stopping_delta": 0.01, "early_stopping_metric": "mcc", "early_stopping_metric_minimize": False, "early_stopping_patience": 5, "evaluate_during_training_steps": 1000, "fp16": False, "num_train_epochs":3 } model = ClassificationModel( "bert", "dbmdz/bert-base-turkish-cased", use_cuda=cuda_available, args=model_args, num_labels=7 ) model.train_model(train_df, acc=sklearn.metrics.accuracy_score) ``` For other training models please check https://simpletransformers.ai/ For the detailed usage of Turkish Text Classification please check the [python notebook](https://github.com/savasy/TurkishTextClassification/blob/master/Bert_base_Text_Classification_for_Turkish.ipynb)
2,564
GeniusVoice/tinybertje-msmarco-finetuned
[ "LABEL_0" ]
Entry not found
15
JP040/bert-german-sentiment-twitter
[ "negative", "neutral", "positive" ]
Entry not found
15
Mithil/RobertaAmazonTrained
null
--- license: other ---
23
yangheng/deberta-v3-base-absa-v1.1
[ "Negative", "Neutral", "Positive" ]
--- language: - en tags: - aspect-based-sentiment-analysis - PyABSA license: mit datasets: - laptop14 - restaurant14 - restaurant16 - ACL-Twitter - MAMS - Television - TShirt - Yelp metrics: - accuracy - macro-f1 widget: - text: "[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP] " --- # Note This model is trained with 30k+ ABSA samples, see [ABSADatasets](https://github.com/yangheng95/ABSADatasets). However, the test sets are not included in pre-training, so you can use this model for training and benchmarking on common ABSA datasets, e.g., the Laptop14 and Rest14 datasets. (Except for the Rest15 dataset!) # DeBERTa for aspect-based sentiment analysis The `deberta-v3-base-absa` model for aspect-based sentiment analysis, trained with English datasets from [ABSADatasets](https://github.com/yangheng95/ABSADatasets). ## Training Model This model is trained based on the FAST-LCF-BERT model with `microsoft/deberta-v3-base`, which comes from [PyABSA](https://github.com/yangheng95/PyABSA). To track state-of-the-art models, please see [PyABSA](https://github.com/yangheng95/PyABSA). ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa-v1.1") model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-base-absa-v1.1") ``` ## Example in PyABSA An [example](https://github.com/yangheng95/PyABSA/blob/release/demos/aspect_polarity_classification/train_apc_multilingual.py) of using FAST-LCF-BERT on PyABSA datasets. ## Datasets This model is fine-tuned with 180k examples for the ABSA dataset (including augmented data). Training dataset files: ``` loading: integrated_datasets/apc_datasets/SemEval/laptop14/Laptops_Train.xml.seg loading: integrated_datasets/apc_datasets/SemEval/restaurant14/Restaurants_Train.xml.seg loading: integrated_datasets/apc_datasets/SemEval/restaurant16/restaurant_train.raw loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/train.raw loading: integrated_datasets/apc_datasets/MAMS/train.xml.dat loading: integrated_datasets/apc_datasets/Television/Television_Train.xml.seg loading: integrated_datasets/apc_datasets/TShirt/Menstshirt_Train.xml.seg loading: integrated_datasets/apc_datasets/Yelp/yelp.train.txt ``` If you use this model in your research, please cite our paper: ``` @article{YangZMT21, author = {Heng Yang and Biqing Zeng and Mayi Xu and Tianxing Wang}, title = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable Sentiment Dependency Learning}, journal = {CoRR}, volume = {abs/2110.08604}, year = {2021}, url = {https://arxiv.org/abs/2110.08604}, eprinttype = {arXiv}, eprint = {2110.08604}, timestamp = {Fri, 22 Oct 2021 13:33:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2110-08604.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
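The usage snippet above only loads the tokenizer and model. As a hedged illustration (not the official PyABSA inference path), the `[CLS] sentence [SEP] aspect [SEP]` format shown in the widget can be reproduced by encoding the sentence and the aspect as a text pair; the label names are read from the model config rather than hardcoded, and the example sentence/aspect below are taken from the widget.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "yangheng/deberta-v3-base-absa-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

sentence = "when tables opened up, the manager sat another party before us."
aspect = "manager"

# Encoding (sentence, aspect) as a pair yields "[CLS] sentence [SEP] aspect [SEP]",
# matching the widget format above.
inputs = tokenizer(sentence, aspect, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits[0], dim=-1)

# Label names come from the model config (e.g. Negative / Neutral / Positive).
for label_id, p in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], round(p, 4))
```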
3,142
cointegrated/rubert-base-cased-nli-twoway
[ "entailment", "not_entailment" ]
--- language: ru pipeline_tag: zero-shot-classification tags: - rubert - russian - nli - rte - zero-shot-classification widget: - text: "Я хочу поехать в Австралию" candidate_labels: "спорт,путешествия,музыка,кино,книги,наука,политика" hypothesis_template: "Тема текста - {}." --- # RuBERT for NLI (natural language inference) This is the [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) fine-tuned to predict the logical relationship between two short texts: entailment or not entailment. For more details, see the card for a similar model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
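A minimal zero-shot sketch based on the widget metadata above; the candidate labels and the `hypothesis_template` are taken directly from it, and it assumes the standard Hugging Face `zero-shot-classification` pipeline handles this two-way (entailment / not_entailment) model.

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="cointegrated/rubert-base-cased-nli-twoway",
)

text = "Я хочу поехать в Австралию"
labels = ["спорт", "путешествия", "музыка", "кино", "книги", "наука", "политика"]

result = classifier(text, labels, hypothesis_template="Тема текста - {}.")
# Labels are returned sorted by score, best first.
print(result["labels"][0], round(result["scores"][0], 3))
```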
652
dtomas/roberta-base-bne-irony
null
--- language: - es tags: - irony - sarcasm - spanish widget: - text: "¡Cómo disfruto peleándome con los Transformers!" example_title: "Ironic" - text: "Madrid es la capital de España" example_title: "Non ironic" --- # RoBERTa base finetuned for Spanish irony detection ## Model description Model to perform irony detection in Spanish. This is a finetuned version of the [RoBERTa-base-bne model](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the [IroSvA](https://www.autoritas.net/IroSvA2019/) corpus. Only the Spanish from Spain variant was used in the training process. It comprises 2,400 tweets labeled as ironic/non-ironic.
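The card above has no code snippet, so here is a minimal sketch using the Hugging Face `text-classification` pipeline with the two widget examples. Note that the exact label names returned (ironic vs. non-ironic) come from the model's own config and are not documented in the card.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="dtomas/roberta-base-bne-irony")

examples = [
    "¡Cómo disfruto peleándome con los Transformers!",  # ironic widget example
    "Madrid es la capital de España",  # non-ironic widget example
]
for text in examples:
    pred = classifier(text)[0]
    # Label names are whatever the model config defines for this checkpoint.
    print(text, "->", pred["label"], round(pred["score"], 3))
```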
660
tals/albert-base-vitaminc_wnei-fever
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: python datasets: - fever - glue - tals/vitaminc --- # Details Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 21`). For more details see: https://github.com/TalSchuster/VitaminC When using this model, please cite the paper. # BibTeX entry and citation info ```bibtex @inproceedings{schuster-etal-2021-get, title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence", author = "Schuster, Tal and Fisch, Adam and Barzilay, Regina", booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.naacl-main.52", doi = "10.18653/v1/2021.naacl-main.52", pages = "624--643", abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.", } ```
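The card defers usage details to the VitaminC repository, so the following is only a rough sketch: it assumes the checkpoint loads as a standard sequence-classification model, while the claim/evidence input order and the meaning of `LABEL_0`/`LABEL_1`/`LABEL_2` are not documented here and should be checked against the linked repo. The claim and evidence strings are illustrative, paraphrased from the abstract above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "tals/albert-base-vitaminc_wnei-fever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

claim = "VitaminC contains over 400,000 claim-evidence pairs."
evidence = "We leverage Wikipedia revisions to create a total of over 400,000 claim-evidence pairs."

# Input order and label semantics follow the VitaminC repo; verify before relying on this.
inputs = tokenizer(claim, evidence, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits[0], dim=-1)
print({model.config.id2label[i]: round(p, 4) for i, p in enumerate(probs.tolist())})
```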
2,357
cambridgeltl/trans-encoder-cross-simcse-roberta-base
[ "LABEL_0" ]
Entry not found
15
nickprock/distilbert-base-uncased-banking77-classification
[ "activate_my_card", "age_limit", "card_acceptance", "card_arrival", "card_delivery_estimate", "card_linking", "card_not_working", "card_payment_fee_charged", "card_payment_not_recognised", "card_payment_wrong_exchange_rate", "card_swallowed", "cash_withdrawal_charge", "apple_pay_or_google_pay", "cash_withdrawal_not_recognised", "change_pin", "compromised_card", "contactless_not_working", "country_support", "declined_card_payment", "declined_cash_withdrawal", "declined_transfer", "direct_debit_payment_not_recognised", "disposable_card_limits", "atm_support", "edit_personal_details", "exchange_charge", "exchange_rate", "exchange_via_app", "extra_charge_on_statement", "failed_transfer", "fiat_currency_support", "get_disposable_virtual_card", "get_physical_card", "getting_spare_card", "automatic_top_up", "getting_virtual_card", "lost_or_stolen_card", "lost_or_stolen_phone", "order_physical_card", "passcode_forgotten", "pending_card_payment", "pending_cash_withdrawal", "pending_top_up", "pending_transfer", "pin_blocked", "balance_not_updated_after_bank_transfer", "receiving_money", "Refund_not_showing_up", "request_refund", "reverted_card_payment?", "supported_cards_and_currencies", "terminate_account", "top_up_by_bank_transfer_charge", "top_up_by_card_charge", "top_up_by_cash_or_cheque", "top_up_failed", "balance_not_updated_after_cheque_or_cash_deposit", "top_up_limits", "top_up_reverted", "topping_up_by_card", "transaction_charged_twice", "transfer_fee_charged", "transfer_into_account", "transfer_not_received_by_recipient", "transfer_timing", "unable_to_verify_identity", "verify_my_identity", "beneficiary_not_allowed", "verify_source_of_funds", "verify_top_up", "virtual_card_not_working", "visa_or_mastercard", "why_verify_identity", "wrong_amount_of_cash_received", "wrong_exchange_rate_for_cash_withdrawal", "cancel_transfer", "card_about_to_expire" ]
--- license: mit tags: - generated_from_trainer datasets: - banking77 metrics: - accuracy model-index: - name: distilbert-base-uncased-banking77-classification results: - task: name: Text Classification type: text-classification dataset: name: banking77 type: banking77 args: default metrics: - name: Accuracy type: accuracy value: 0.924025974025974 - task: type: text-classification name: Text Classification dataset: name: banking77 type: banking77 config: default split: test metrics: - name: Accuracy type: accuracy value: 0.924025974025974 verified: true - name: Precision Macro type: precision value: 0.9278003086307286 verified: true - name: Precision Micro type: precision value: 0.924025974025974 verified: true - name: Precision Weighted type: precision value: 0.9278003086307287 verified: true - name: Recall Macro type: recall value: 0.9240259740259743 verified: true - name: Recall Micro type: recall value: 0.924025974025974 verified: true - name: Recall Weighted type: recall value: 0.924025974025974 verified: true - name: F1 Macro type: f1 value: 0.9243068139192414 verified: true - name: F1 Micro type: f1 value: 0.924025974025974 verified: true - name: F1 Weighted type: f1 value: 0.9243068139192416 verified: true - name: loss type: loss value: 0.31516405940055847 verified: true widget: - text: 'Can I track the card you sent to me? ' example_title: Card Arrival Example - text: Can you explain your exchange rate policy to me? example_title: Exchange Rate Example - text: I can't pay by my credit card example_title: Card Not Working Example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-banking77-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the banking77 dataset. It achieves the following results on the evaluation set: - Loss: 0.3152 - Accuracy: 0.9240 - F1 Score: 0.9243 ## Model description This is my first fine-tuning experiment using Hugging Face. Using distilBERT as a pretrained model, I trained a classifier for online banking queries. It could be useful for addressing tickets. ## Intended uses & limitations The model can be used on text classification. In particular is fine tuned on banking domain. 
## Training and evaluation data The dataset used is [banking77](https://huggingface.co/datasets/banking77) The 77 labels are: |label|intent| |:---:|:----:| |0|activate_my_card| |1|age_limit| |2|apple_pay_or_google_pay| |3|atm_support| |4|automatic_top_up| |5|balance_not_updated_after_bank_transfer| |6|balance_not_updated_after_cheque_or_cash_deposit| |7|beneficiary_not_allowed| |8|cancel_transfer| |9|card_about_to_expire| |10|card_acceptance| |11|card_arrival| |12|card_delivery_estimate| |13|card_linking| |14|card_not_working| |15|card_payment_fee_charged| |16|card_payment_not_recognised| |17|card_payment_wrong_exchange_rate| |18|card_swallowed| |19|cash_withdrawal_charge| |20|cash_withdrawal_not_recognised| |21|change_pin| |22|compromised_card| |23|contactless_not_working| |24|country_support| |25|declined_card_payment| |26|declined_cash_withdrawal| |27|declined_transfer| |28|direct_debit_payment_not_recognised| |29|disposable_card_limits| |30|edit_personal_details| |31|exchange_charge| |32|exchange_rate| |33|exchange_via_app| |34|extra_charge_on_statement| |35|failed_transfer| |36|fiat_currency_support| |37|get_disposable_virtual_card| |38|get_physical_card| |39|getting_spare_card| |40|getting_virtual_card| |41|lost_or_stolen_card| |42|lost_or_stolen_phone| |43|order_physical_card| |44|passcode_forgotten| |45|pending_card_payment| |46|pending_cash_withdrawal| |47|pending_top_up| |48|pending_transfer| |49|pin_blocked| |50|receiving_money| |51|Refund_not_showing_up| |52|request_refund| |53|reverted_card_payment?| |54|supported_cards_and_currencies| |55|terminate_account| |56|top_up_by_bank_transfer_charge| |57|top_up_by_card_charge| |58|top_up_by_cash_or_cheque| |59|top_up_failed| |60|top_up_limits| |61|top_up_reverted| |62|topping_up_by_card| |63|transaction_charged_twice| |64|transfer_fee_charged| |65|transfer_into_account| |66|transfer_not_received_by_recipient| |67|transfer_timing| |68|unable_to_verify_identity| |69|verify_my_identity| |70|verify_source_of_funds| |71|verify_top_up| |72|virtual_card_not_working| |73|visa_or_mastercard| |74|why_verify_identity| |75|wrong_amount_of_cash_received| |76|wrong_exchange_rate_for_cash_withdrawal| ## Training procedure ``` from transformers import pipeline pipe = pipeline("text-classification", model="nickprock/distilbert-base-uncased-banking77-classification") pipe("I can't pay by my credit card") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 3.8732 | 1.0 | 157 | 3.1476 | 0.5370 | 0.4881 | | 2.5598 | 2.0 | 314 | 1.9780 | 0.6916 | 0.6585 | | 1.5863 | 3.0 | 471 | 1.2239 | 0.8042 | 0.7864 | | 0.9829 | 4.0 | 628 | 0.8067 | 0.8565 | 0.8487 | | 0.6274 | 5.0 | 785 | 0.5837 | 0.8799 | 0.8752 | | 0.4304 | 6.0 | 942 | 0.4630 | 0.9042 | 0.9040 | | 0.3106 | 7.0 | 1099 | 0.3982 | 0.9088 | 0.9087 | | 0.2238 | 8.0 | 1256 | 0.3587 | 0.9110 | 0.9113 | | 0.1708 | 9.0 | 1413 | 0.3351 | 0.9208 | 0.9208 | | 0.1256 | 10.0 | 1570 | 0.3242 | 0.9179 | 0.9182 | | 0.0981 | 11.0 | 1727 | 0.3136 | 0.9211 | 0.9214 | | 0.0745 | 12.0 | 1884 | 0.3151 | 0.9211 | 0.9213 | | 0.0601 | 13.0 | 2041 | 0.3089 | 0.9218 | 0.9220 | | 0.0482 | 14.0 | 2198 | 0.3158 | 0.9214 | 0.9216 | | 0.0402 | 15.0 
| 2355 | 0.3126 | 0.9224 | 0.9226 | | 0.0344 | 16.0 | 2512 | 0.3143 | 0.9231 | 0.9233 | | 0.0298 | 17.0 | 2669 | 0.3156 | 0.9231 | 0.9233 | | 0.0272 | 18.0 | 2826 | 0.3134 | 0.9244 | 0.9247 | | 0.0237 | 19.0 | 2983 | 0.3156 | 0.9244 | 0.9246 | | 0.0229 | 20.0 | 3140 | 0.3152 | 0.9240 | 0.9243 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
7,157
sagorsarker/codeswitch-spaeng-sentiment-analysis-lince
null
--- language: - es - en datasets: - lince license: mit tags: - codeswitching - spanish-english - sentiment-analysis --- # codeswitch-spaeng-sentiment-analysis-lince This is a pretrained model for **Sentiment Analysis** of `spanish-english` code-mixed data used from [LinCE](https://ritual.uh.edu/lince/home) This model is trained for this below repository. [https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch) To install codeswitch: ``` pip install codeswitch ``` ## Sentiment Analysis of Spanish-English Code-Mixed Data * **Method-1** ```py from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-spaeng-sentiment-analysis-lince") model = AutoModelForSequenceClassification.from_pretrained("sagorsarker/codeswitch-spaeng-sentiment-analysis-lince") nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) sentence = "El perro le ladraba a La Gatita .. .. lol #teamlagatita en las playas de Key Biscayne este Memorial day" nlp(sentence) ``` * **Method-2** ```py from codeswitch.codeswitch import SentimentAnalysis sa = SentimentAnalysis('spa-eng') sentence = "El perro le ladraba a La Gatita .. .. lol #teamlagatita en las playas de Key Biscayne este Memorial day" result = sa.analyze(sentence) print(result) ```
1,369
MoritzLaurer/DeBERTa-v3-base-mnli
[ "contradiction", "entailment", "neutral" ]
--- language: - en tags: - text-classification - zero-shot-classification metrics: - accuracy pipeline_tag: zero-shot-classification --- # DeBERTa-v3-base-mnli ## Model description This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs. The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf). For a more powerful model, check out [DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) which was trained on even more data. ## Intended uses & limitations #### How to use the model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model_name = "MoritzLaurer/DeBERTa-v3-base-mnli" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) device = "cuda:0" if torch.cuda.is_available() else "cpu" model.to(device) premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing." hypothesis = "The movie was good." input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") output = model(input["input_ids"].to(device)) prediction = torch.softmax(output["logits"][0], -1).tolist() label_names = ["entailment", "neutral", "contradiction"] prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)} print(prediction) ``` ### Training data This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs. ### Training procedure DeBERTa-v3-base-mnli was trained using the Hugging Face trainer with the following hyperparameters. ``` training_args = TrainingArguments( num_train_epochs=5, # total number of training epochs learning_rate=2e-05, per_device_train_batch_size=32, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_ratio=0.1, # number of warmup steps for learning rate scheduler weight_decay=0.06, # strength of weight decay fp16=True # mixed precision training ) ``` ### Eval results The model was evaluated using the matched test set and achieves 0.90 accuracy. ## Limitations and bias Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases. ### BibTeX entry and citation info If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub. ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ### Debugging and issues Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.
3,257
apple/ane-distilbert-base-uncased-finetuned-sst-2-english
[ "NEGATIVE", "POSITIVE" ]
--- language: en license: apache-2.0 datasets: - sst2 --- # DistilBERT optimized for Apple Neural Engine This is the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model, optimized for the Apple Neural Engine (ANE) as described in the article [Deploying Transformers on the Apple Neural Engine](https://machinelearning.apple.com/research/neural-engine-transformers). The source code is taken from Apple's [ml-ane-transformers](https://github.com/apple/ml-ane-transformers) GitHub repo, modified slightly to make it usable from the 🤗 Transformers library. For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased). ## How to use Usage example: ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer model_checkpoint = "apple/ane-distilbert-base-uncased-finetuned-sst-2-english" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = AutoModelForSequenceClassification.from_pretrained( model_checkpoint, trust_remote_code=True, return_dict=False, ) inputs = tokenizer( ["The Neural Engine is really fast"], return_tensors="pt", max_length=128, padding="max_length", ) with torch.no_grad(): outputs = model(**inputs) ``` ## Using the model with Core ML PyTorch does not utilize the ANE, and running this version of the model with PyTorch on the CPU or GPU may actually be slower than the original. To take advantage of the hardware acceleration of the ANE, use the Core ML version of the model, **DistilBERT_fp16.mlpackage**. Core ML usage example from Python: ```python import numpy as np import coremltools as ct mlmodel = ct.models.MLModel("DistilBERT_fp16.mlpackage") inputs = tokenizer( ["The Neural Engine is really fast"], return_tensors="np", max_length=128, padding="max_length", ) outputs_coreml = mlmodel.predict({ "input_ids": inputs["input_ids"].astype(np.int32), "attention_mask": inputs["attention_mask"].astype(np.int32), }) ``` To use the model from Swift, you will need to tokenize the input yourself according to the BERT rules. You can find a Swift implementation of the [BERT tokenizer here](https://github.com/huggingface/swift-coreml-transformers).
2,322
cross-encoder/nli-deberta-v3-small
[ "contradiction", "entailment", "neutral" ]
--- language: en pipeline_tag: zero-shot-classification tags: - microsoft/deberta-v3-small datasets: - multi_nli - snli metrics: - accuracy license: apache-2.0 --- # Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance - Accuracy on SNLI-test dataset: 91.65 - Accuracy on MNLI mismatched set: 87.55 For futher evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-deberta-v3-small') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) #Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-small') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-small') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-small') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
2,784
kornosk/bert-election2020-twitter-stance-biden-KE-MLM
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (KE-MLM) Pre-trained weights for **KE-MLM model** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden-KE-MLM" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Biden!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Biden is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
3,602
cambridgeltl/trans-encoder-cross-simcse-roberta-large
[ "LABEL_0" ]
Entry not found
15
uer/roberta-base-finetuned-jd-binary-chinese
[ "negative (stars 1, 2 and 3)", "positive (stars 4 and 5)" ]
--- language: zh widget: - text: "这本书真的很不错" --- # Chinese RoBERTa-Base Models for Text Classification ## Model description This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://arxiv.org/abs/1909.05658). You can download the 5 Chinese RoBERTa-Base classification models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo) (in UER-py format), or via HuggingFace from the links below: | Dataset | Link | | :-----------: | :-------------------------------------------------------: | | **JD full** | [**roberta-base-finetuned-jd-full-chinese**][jd_full] | | **JD binary** | [**roberta-base-finetuned-jd-binary-chinese**][jd_binary] | | **Dianping** | [**roberta-base-finetuned-dianping-chinese**][dianping] | | **Ifeng** | [**roberta-base-finetuned-ifeng-chinese**][ifeng] | | **Chinanews** | [**roberta-base-finetuned-chinanews-chinese**][chinanews] | ## How to use You can use this model directly with a pipeline for text classification (take the case of roberta-base-finetuned-chinanews-chinese): ```python >>> from transformers import AutoModelForSequenceClassification,AutoTokenizer,pipeline >>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese') >>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese') >>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) >>> text_classification("北京上个月召开了两会") [{'label': 'mainland China politics', 'score': 0.7211663722991943}] ``` ## Training data 5 Chinese text classification datasets are used. JD full, JD binary, and Dianping datasets consist of user reviews of different sentiment polarities. Ifeng and Chinanews consist of first paragraphs of news articles of different topic classes. They are collected by [Glyph](https://github.com/zhangxiangxiao/glyph) project and more details are discussed in corresponding [paper](https://arxiv.org/abs/1708.02657). ## Training procedure Models are fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune three epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on development set is achieved. We use the same hyper-parameters on different models. 
Taking the case of roberta-base-finetuned-chinanews-chinese ``` python3 run_classifier.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \ --vocab_path models/google_zh_vocab.txt \ --train_path datasets/glyph/chinanews/train.tsv \ --dev_path datasets/glyph/chinanews/dev.tsv \ --output_model_path models/chinanews_classifier_model.bin \ --learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512 ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_bert_text_classification_from_uer_to_huggingface.py --input_model_path models/chinanews_classifier_model.bin \ --output_model_path pytorch_model.bin \ --layers_num 12 ``` ### BibTeX entry and citation info ``` @article{devlin2018bert, title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding}, author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1810.04805}, year={2018} } @article{liu2019roberta, title={Roberta: A robustly optimized bert pretraining approach}, author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin}, journal={arXiv preprint arXiv:1907.11692}, year={2019} } @article{zhang2017encoding, title={Which encoding is the best for text classification in chinese, english, japanese and korean?}, author={Zhang, Xiang and LeCun, Yann}, journal={arXiv preprint arXiv:1708.02657}, year={2017} } @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ``` [jd_full]:https://huggingface.co/uer/roberta-base-finetuned-jd-full-chinese [jd_binary]:https://huggingface.co/uer/roberta-base-finetuned-jd-binary-chinese [dianping]:https://huggingface.co/uer/roberta-base-finetuned-dianping-chinese [ifeng]:https://huggingface.co/uer/roberta-base-finetuned-ifeng-chinese [chinanews]:https://huggingface.co/uer/roberta-base-finetuned-chinanews-chinese
5,141
w11wo/indonesian-roberta-base-sentiment-classifier
[ "negative", "neutral", "positive" ]
--- language: id tags: - indonesian-roberta-base-sentiment-classifier license: mit datasets: - indonlu widget: - text: "Jangan sampai saya telpon bos saya ya!" --- ## Indonesian RoBERTa Base Sentiment Classifier Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `SmSA` dataset consisting of Indonesian comments and reviews. After training, the model achieved an evaluation accuracy of 94.36% and F1-macro of 92.42%. On the benchmark test set, the model achieved an accuracy of 93.2% and F1-macro of 91.02%. Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ---------------------------------------------- | ------- | ------------ | ------------------------------- | | `indonesian-roberta-base-sentiment-classifier` | 124M | RoBERTa Base | `SmSA` | ## Evaluation Results The model was trained for 5 epochs and the best model was loaded at the end. | Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | | ----- | ------------- | --------------- | -------- | -------- | --------- | -------- | | 1 | 0.342600 | 0.213551 | 0.928571 | 0.898539 | 0.909803 | 0.890694 | | 2 | 0.190700 | 0.213466 | 0.934127 | 0.901135 | 0.925297 | 0.882757 | | 3 | 0.125500 | 0.219539 | 0.942857 | 0.920901 | 0.927511 | 0.915193 | | 4 | 0.083600 | 0.235232 | 0.943651 | 0.924227 | 0.926494 | 0.922048 | | 5 | 0.059200 | 0.262473 | 0.942063 | 0.920583 | 0.924084 | 0.917351 | ## How to Use ### As Text Classifier ```python from transformers import pipeline pretrained_name = "w11wo/indonesian-roberta-base-sentiment-classifier" nlp = pipeline( "sentiment-analysis", model=pretrained_name, tokenizer=pretrained_name ) nlp("Jangan sampai saya telpon bos saya ya!") ``` ## Disclaimer Do consider the biases which come from both the pre-trained RoBERTa model and the `SmSA` dataset that may be carried over into the results of this model. ## Author Indonesian RoBERTa Base Sentiment Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
2,842
edumunozsala/roberta_bne_sentiment_analysis_es
[ "Negativo", "Positivo" ]
--- language: es tags: - sagemaker - roberta-bne - TextClassification - SentimentAnalysis license: apache-2.0 datasets: - IMDbreviews_es metrics: - accuracy model-index: - name: roberta_bne_sentiment_analysis_es results: - task: name: Sentiment Analysis type: sentiment-analysis dataset: name: "IMDb Reviews in Spanish" type: IMDbreviews_es metrics: - name: Accuracy, type: accuracy, value: 0.9106666666666666 - name: F1 Score, type: f1, value: 0.9090909090909091 - name: Precision, type: precision, value: 0.9063852813852814 - name: Recall, type: recall, value: 0.9118127381600436 widget: - text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal" --- # Model roberta_bne_sentiment_analysis_es ## **A finetuned model for Sentiment analysis in Spanish** This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container, The base model is **RoBERTa-base-bne** which is a RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB. It was trained by The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) **RoBERTa BNE Citation** Check out the paper for all the details: https://arxiv.org/abs/2107.07253 ``` @article{gutierrezfandino2022, author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas}, title = {MarIA: Spanish Language Models}, journal = {Procesamiento del Lenguaje Natural}, volume = {68}, number = {0}, year = {2022}, issn = {1989-7553}, url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405}, pages = {39--60} } ``` ## Dataset The dataset is a collection of movie reviews in Spanish, about 50,000 reviews. The dataset is balanced and provides every review in english, in spanish and the label in both languages. Sizes of datasets: - Train dataset: 42,500 - Validation dataset: 3,750 - Test dataset: 3,750 ## Intended uses & limitations This model is intented for Sentiment Analysis for spanish corpus and finetuned specially for movie reviews but it can be applied to other kind of reviews. ## Hyperparameters { "epochs": "4", "train_batch_size": "32", "eval_batch_size": "8", "fp16": "true", "learning_rate": "3e-05", "model_name": "\"PlanTL-GOB-ES/roberta-base-bne\"", "sagemaker_container_log_level": "20", "sagemaker_program": "\"train.py\"", } ## Evaluation results - Accuracy = 0.9106666666666666 - F1 Score = 0.9090909090909091 - Precision = 0.9063852813852814 - Recall = 0.9118127381600436 ## Test results ## Model in action ### Usage for Sentiment Analysis ```python import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("edumunozsala/roberta_bne_sentiment_analysis_es") model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/roberta_bne_sentiment_analysis_es") text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal" input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) outputs = model(input_ids) output = outputs.logits.argmax(1) ``` Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
3,629
bespin-global/klue-roberta-small-3i4k-intent-classification
[ "command", "fragment", "intonation-depedent utterance", "question", "rhetorical command", "rhetorical question", "statement" ]
---
language: ko
tags:
- intent-classification
datasets:
- kor_3i4k
license: cc-by-nc-4.0
---

## Finetuning
- Pretrain Model : [klue/roberta-small](https://github.com/KLUE-benchmark/KLUE)
- Dataset for fine-tuning : [3i4k](https://github.com/warnikchow/3i4k)
	- Train : 46,863
	- Validation : 8,271 (15% of Train)
	- Test : 6,121
- Label info
  - 0: "fragment",
  - 1: "statement",
  - 2: "question",
  - 3: "command",
  - 4: "rhetorical question",
  - 5: "rhetorical command",
  - 6: "intonation-dependent utterance"
- Parameters of Training
```
{
	"epochs": 3 (setting 10 but early stopped),
	"batch_size": 32,
	"optimizer_class": "<keras.optimizer_v2.adam.Adam>",
	"optimizer_params": {
		"lr": 5e-05
	},
	"min_delta": 0.01
}
```

## Usage
```python
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification, TextClassificationPipeline

# Load the fine-tuned intent classification model from the Hugging Face Model Hub
HUGGINGFACE_MODEL_PATH = "bespin-global/klue-roberta-small-3i4k-intent-classification"
loaded_tokenizer = RobertaTokenizerFast.from_pretrained(HUGGINGFACE_MODEL_PATH)
loaded_model = RobertaForSequenceClassification.from_pretrained(HUGGINGFACE_MODEL_PATH)

# using Pipeline
text_classifier = TextClassificationPipeline(
    tokenizer=loaded_tokenizer,
    model=loaded_model,
    return_all_scores=True
)

# predict
text = "your text"
preds_list = text_classifier(text)
# return_all_scores=True returns the scores of every label,
# so pick the label with the highest score
best_pred = max(preds_list[0], key=lambda pred: pred["score"])
print(f"Label of Best Intention: {best_pred['label']}")
print(f"Score of Best Intention: {best_pred['score']}")
```

## Evaluation
```
                               precision    recall  f1-score   support

                      command       0.89      0.92      0.90      1296
                     fragment       0.98      0.96      0.97       600
intonation-depedent utterance       0.71      0.69      0.70       327
                     question       0.95      0.97      0.96      1786
           rhetorical command       0.87      0.64      0.74       108
          rhetorical question       0.61      0.63      0.62       174
                    statement       0.91      0.89      0.90      1830

                     accuracy                           0.90      6121
                    macro avg       0.85      0.81      0.83      6121
                 weighted avg       0.90      0.90      0.90      6121
```

## Citing & Authors
<!--- Describe where people can find more information -->
[Jaehyeong](https://huggingface.co/jaehyeong) at [Bespin Global](https://www.bespinglobal.com/)
2,549
cambridgeltl/trans-encoder-cross-simcse-bert-base
[ "LABEL_0" ]
Entry not found
15
cambridgeltl/sst_mobilebert-uncased
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Capreolus/bert-base-msmarco
null
# capreolus/bert-base-msmarco ## Model description BERT-Base model (`google/bert_uncased_L-12_H-768_A-12`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model; see the [Capreolus BERT-MaxP implementation](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) for a usage example. This corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_bert_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
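Since the card points to the Capreolus implementation but includes no inline snippet, the following is a minimal reranking sketch, not taken from the original card: the query and passage strings are invented, and the assumption that logit index 1 corresponds to the "relevant" class should be verified against the linked Capreolus code or `config.id2label`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Capreolus/bert-base-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("Capreolus/bert-base-msmarco")

query = "what is the capital of france"  # hypothetical query
passage = "Paris is the capital and most populous city of France."  # hypothetical passage

# Score the query-passage pair as a single sequence-pair input
inputs = tokenizer(query, passage, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Assumes index 1 is the "relevant" class; check config.id2label before relying on this
print(probs[0, 1].item())
```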
778
patrickramosobf/bert-base-japanese-v2-wrime-fine-tune
[ "writer_joy", "writer_sadness", "reader_anticipation", "reader_surprise", "reader_anger", "reader_fear", "reader_disgust", "reader_trust", "writer_anticipation", "writer_surprise", "writer_anger", "writer_fear", "writer_disgust", "writer_trust", "reader_joy", "reader_sadness" ]
---
license: cc-by-sa-3.0
language:
- ja
tags:
- emotion-analysis
datasets:
- wrime
widget:
- text: "車のタイヤがパンクしてた。。いたずらの可能性が高いんだって。。"
---

# WRIME-fine-tuned BERT base Japanese

This model is a [Japanese BERT<sub>BASE</sub>](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) fine-tuned on the [WRIME](https://github.com/ids-cv/wrime) dataset. It was trained as part of the paper ["Emotion Analysis of Writers and Readers of Japanese Tweets on Vaccinations"](https://aclanthology.org/2022.wassa-1.10/). Fine-tuning code is available at this [repo](https://github.com/PatrickJohnRamos/BERT-Japan-vaccination).

# Intended uses and limitations

This model can be used to predict intensity scores for eight emotions, for both writers and readers. Please refer to the `Fine-tuning data` section for the list of emotions. Because of the regression fine-tuning task, it is possible for the model to infer scores outside of the range of the scores of the fine-tuning data (`score < 0` or `score > 4`).

# Model Architecture, Tokenization, and Pretraining

The Japanese BERT<sub>BASE</sub> fine-tuned here was `cl-tohoku/bert-base-japanese-v2`. Please refer to their [model card](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) for details regarding the model architecture, tokenization, pretraining data, and pretraining procedure.

# Fine-tuning data

The model is fine-tuned on [WRIME](https://github.com/ids-cv/wrime), a dataset of Japanese Tweets annotated with writer and reader emotion intensities. We use version 1 of the dataset. Each Tweet is accompanied by a set of writer emotion intensities (from the author of the Tweet) and three sets of reader emotions (from three annotators). The emotions follow Plutchik's emotions, namely:

* joy
* sadness
* anticipation
* surprise
* anger
* fear
* disgust
* trust

These emotion intensities follow a four-point scale:

| emotion intensity | emotion presence|
|---|---|
| 0 | no |
| 1 | weak |
| 2 | medium |
| 3 | strong |

# Fine-tuning

The BERT is fine-tuned to directly regress the emotion intensities of the writer and the averaged emotions of the readers from each Tweet, meaning there are 16 outputs (8 emotions each for the writer and the readers).

The fine-tuning was inspired by common BERT fine-tuning procedures. The BERT was fine-tuned on WRIME for 3 epochs using the AdamW optimizer with a learning rate of 2e-5, β<sub>1</sub>=0.9, β<sub>2</sub>=0.999, weight decay of 0.01, linear decay, a warmup ratio of 0.01, and a batch size of 32. Training was conducted with an NVIDIA Tesla K80 and finished in 3 hours.

# Evaluation results

Below are the MSEs of the fine-tuned BERT on the test split of WRIME.

| Annotator | Joy | Sadness | Anticipation | Surprise | Anger | Fear | Disgust | Trust | Overall |
|---|---|---|---|---|---|---|---|---|---|
| Writer | 0.658 | 0.688 | 0.746 | 0.542 | 0.486 | 0.462 | 0.664 | 0.400 | 0.581 |
| Reader | 0.192 | 0.178 | 0.211 | 0.139 | 0.032 | 0.147 | 0.123 | 0.029 | 0.131 |
| Both | 0.425 | 0.433 | 0.479 | 0.341 | 0.259 | 0.304 | 0.394 | 0.214 | 0.356 |
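Since the card stops at the evaluation numbers, here is a minimal inference sketch that is not part of the original card: it assumes the checkpoint exposes the 16 writer/reader regression heads through `config.id2label`, and the `cl-tohoku` tokenizer typically requires `fugashi` and `unidic-lite` to be installed. The input sentence is the widget example from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "patrickramosobf/bert-base-japanese-v2-wrime-fine-tune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "車のタイヤがパンクしてた。。いたずらの可能性が高いんだって。。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits[0]  # 16 regression outputs, one per writer/reader emotion

# Map each output index to its emotion name via the model config
for idx, score in enumerate(scores.tolist()):
    print(model.config.id2label[idx], round(score, 3))
```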
3,030
tezign/BERT-LSTM-based-ABSA
[ "negative", "neutral", "positive" ]
---
language: en
tags:
- aspect-term-sentiment-analysis
- pytorch
- ATSA
datasets:
- semeval2014
widget:
- text: "[CLS] The appearance is very nice, but the battery life is poor. [SEP] appearance [SEP] "
---

# Note

An `aspect term sentiment analysis` baseline built on BERT + LSTM, based on the *BERT LSTM* implementation from https://github.com/avinashsai/BERT-Aspect. The model was trained on the SemEval2014 Task 4 laptop and restaurant datasets.

Our GitHub repo: https://github.com/tezignlab/BERT-LSTM-based-ABSA

Code for the paper "Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis and Natural Language Inference" https://arxiv.org/pdf/2002.04815.pdf.

# Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

MODEL = "tezign/BERT-LSTM-based-ABSA"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, trust_remote_code=True)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)

result = classifier([
    {"text": "The appearance is very nice, but the battery life is poor", "text_pair": "appearance"},
    {"text": "The appearance is very nice, but the battery life is poor", "text_pair": "battery"}
], function_to_apply="softmax")

print(result)

"""
print result
>> [{'label': 'positive', 'score': 0.9129462838172913}, {'label': 'negative', 'score': 0.8834680914878845}]
"""
```
1,443
akhooli/xlm-r-large-arabic-sent
[ "LABEL_0_mixed", "LABEL_1_neg", "LABEL_2_pos" ]
---
language:
- ar
- en
license: mit
---

### xlm-r-large-arabic-sent

Multilingual sentiment classification (Label_0: mixed, Label_1: negative, Label_2: positive) of Arabic reviews, obtained by fine-tuning XLM-Roberta-Large. The model also works zero-shot on other languages, including mixed-language input (e.g. Arabic and English). The mixed category is not accurate and may be confused with the other classes (it was based on a rating of 3 out of 5 in reviews).

Usage: see the last section in this [Colab notebook](https://lnkd.in/d3bCFyZ).
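A minimal usage sketch, not part of the original card: the review texts are invented, and the raw outputs use the LABEL_0/LABEL_1/LABEL_2 names, which the card maps to mixed/negative/positive.

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="akhooli/xlm-r-large-arabic-sent",
)

# Invented examples: one Arabic review and one English review (zero-shot transfer)
print(classifier("الخدمة كانت ممتازة والمنتج رائع"))
print(classifier("The product broke after one day, very disappointing."))
```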
504
Abderrahim2/bert-finetuned-gender_classification
[ "female", "male", "undefined" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: bert-finetuned-gender_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-gender_classification This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1484 - F1: 0.9645 - Roc Auc: 0.9732 - Accuracy: 0.964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:| | 0.1679 | 1.0 | 1125 | 0.1781 | 0.928 | 0.946 | 0.927 | | 0.1238 | 2.0 | 2250 | 0.1252 | 0.9516 | 0.9640 | 0.95 | | 0.0863 | 3.0 | 3375 | 0.1283 | 0.9515 | 0.9637 | 0.95 | | 0.0476 | 4.0 | 4500 | 0.1419 | 0.9565 | 0.9672 | 0.956 | | 0.0286 | 5.0 | 5625 | 0.1428 | 0.9555 | 0.9667 | 0.954 | | 0.0091 | 6.0 | 6750 | 0.1515 | 0.9604 | 0.9700 | 0.959 | | 0.0157 | 7.0 | 7875 | 0.1535 | 0.9580 | 0.9682 | 0.957 | | 0.0048 | 8.0 | 9000 | 0.1484 | 0.9645 | 0.9732 | 0.964 | | 0.0045 | 9.0 | 10125 | 0.1769 | 0.9605 | 0.9703 | 0.96 | | 0.0037 | 10.0 | 11250 | 0.2007 | 0.9565 | 0.9672 | 0.956 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
2,198
DemangeJeremy/4-sentiments-with-flaubert
[ "MIXED", "NEGATIVE", "OBJECTIVE", "POSITIVE" ]
---
language: fr
tags:
- sentiments
- text-classification
- flaubert
- french
- flaubert-large
---

# FlauBERT-based detection of 4 sentiments (mixed, negative, objective, positive)

Work is currently in progress. I will update the model over the next few days.

### How to use it?

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline

loaded_tokenizer = AutoTokenizer.from_pretrained('flaubert/flaubert_large_cased')
loaded_model = AutoModelForSequenceClassification.from_pretrained("DemangeJeremy/4-sentiments-with-flaubert")

nlp = pipeline('sentiment-analysis', model=loaded_model, tokenizer=loaded_tokenizer)
print(nlp("Je suis plutôt confiant."))
```

```
[{'label': 'OBJECTIVE', 'score': 0.3320835530757904}]
```

## Model evaluation results

| Epoch | Validation Loss | Samples Per Second |
|:------:|:--------------:|:------------------:|
| 1 | 2.219246 | 49.476000 |
| 2 | 1.883753 | 47.259000 |
| 3 | 1.747969 | 44.957000 |
| 4 | 1.695606 | 43.872000 |
| 5 | 1.641470 | 45.726000 |

## Citation

If you use this model, please use the following citation:

> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <https://huggingface.co/DemangeJeremy/4-sentiments-with-flaubert>
1,432
michiyasunaga/LinkBERT-large
null
--- license: apache-2.0 language: en datasets: - wikipedia - bookcorpus tags: - bert - exbert - linkbert - feature-extraction - fill-mask - question-answering - text-classification - token-classification --- ## LinkBERT-large LinkBERT-large model pretrained on English Wikipedia articles along with hyperlink information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT). ## Model description LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document. LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval). ## Intended uses & limitations The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification. You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text). ### How to use To use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/LinkBERT-large') model = AutoModel.from_pretrained('michiyasunaga/LinkBERT-large') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases. ## Evaluation results When fine-tuned on downstream tasks, LinkBERT achieves the following results. **General benchmarks ([MRQA](https://github.com/mrqa/MRQA-Shared-Task-2019) and [GLUE](https://gluebenchmark.com/)):** | | HotpotQA | TriviaQA | SearchQA | NaturalQ | NewsQA | SQuAD | GLUE | | ---------------------- | -------- | -------- | -------- | -------- | ------ | ----- | -------- | | | F1 | F1 | F1 | F1 | F1 | F1 | Avg score | | BERT-base | 76.0 | 70.3 | 74.2 | 76.5 | 65.7 | 88.7 | 79.2 | | **LinkBERT-base** | **78.2** | **73.9** | **76.8** | **78.3** | **69.3** | **90.1** | **79.6** | | BERT-large | 78.1 | 73.7 | 78.3 | 79.0 | 70.9 | 91.1 | 80.7 | | **LinkBERT-large** | **80.8** | **78.2** | **80.5** | **81.0** | **72.6** | **92.7** | **81.1** | ## Citation If you find LinkBERT useful in your project, please cite the following: ```bibtex @InProceedings{yasunaga2022linkbert, author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang}, title = {LinkBERT: Pretraining Language Models with Document Links}, year = {2022}, booktitle = {Association for Computational Linguistics (ACL)}, } ```
3,547
tomh/toxigen_hatebert
null
--- language: - en tags: - text-classification --- Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar. This model comes from the paper [ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection](https://arxiv.org/abs/2203.09509) and can be used to detect implicit hate speech. Please visit the [Github Repository](https://github.com/microsoft/TOXIGEN) for the training dataset and further details. ```bibtex @inproceedings{hartvigsen2022toxigen, title = "{T}oxi{G}en: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection", author = "Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece", booktitle = "Proceedings of the 60th Annual Meeting of the Association of Computational Linguistics", year = "2022" } ```
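A minimal classification sketch, assuming the checkpoint loads as a standard `text-classification` pipeline; the example sentence is invented, and the explicit tokenizer argument follows the ToxiGen repository examples in case the model repository does not bundle its own tokenizer.

```python
from transformers import pipeline

# If the repository ships its own tokenizer, the tokenizer argument can be dropped.
detector = pipeline(
    "text-classification",
    model="tomh/toxigen_hatebert",
    tokenizer="bert-base-cased",
)

# Invented, benign example sentence
print(detector("Immigrants contribute a great deal to the communities they join."))
```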
904
HooshvareLab/bert-fa-base-uncased-clf-digimag
[ "بازی ویدیویی", "راهنمای خرید", "سلامت و زیبایی", "علم و تکنولوژی", "عمومی", "هنر و سینما", "کتاب و ادبیات" ]
--- language: fa license: apache-2.0 --- # ParsBERT (v2.0) A Transformer-based Model for Persian Language Understanding We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes! Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models. ## Persian Text Classification [DigiMag, Persian News] The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`. ### DigiMag A total of 8,515 articles scraped from [Digikala Online Magazine](https://www.digikala.com/mag/). This dataset includes seven different classes. 1. Video Games 2. Shopping Guide 3. Health Beauty 4. Science Technology 5. General 6. Art Cinema 7. Books Literature | Label | # | |:------------------:|:----:| | Video Games | 1967 | | Shopping Guide | 125 | | Health Beauty | 1610 | | Science Technology | 2772 | | General | 120 | | Art Cinema | 1667 | | Books Literature | 254 | **Download** You can download the dataset from [here](https://drive.google.com/uc?id=1YgrCYY-Z0h2z0-PfWVfOGt1Tv0JDI-qz) ## Results The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures. | Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | |:-----------------:|:-----------:|:-----------:|:-----:| | Digikala Magazine | 93.65* | 93.59 | 90.72 | ## How to use :hugs: | Task | Notebook | |---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Text Classification | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
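A minimal inline usage sketch in addition to the linked notebook, not taken from the original card; the Persian headline is an invented example, and the returned label names come from the model config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-clf-digimag",
)

# Invented Persian example about a smartphone release (expected: Science Technology)
print(classifier("گوشی هوشمند جدید با دوربین قدرتمند معرفی شد"))
```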
2,728
cambridgeltl/trans-encoder-cross-simcse-bert-large
[ "LABEL_0" ]
Entry not found
15
Hate-speech-CNERG/dehatebert-mono-french
[ "NON_HATE", "HATE" ]
---
language: fr
license: apache-2.0
---

This model is used for detecting **hate speech** in **French**. The "mono" in the name refers to the monolingual setting, where the model is trained using only French-language data. It is fine-tuned from the multilingual BERT model. The model was trained with different learning rates; the best validation score achieved is 0.692094, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
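A minimal usage sketch, not in the original card; the French sentence is an invented, non-hateful example, and the output labels are NON_HATE / HATE.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/dehatebert-mono-french",
)

# Invented benign example; expected label: NON_HATE
print(classifier("Bonjour, j'espère que vous passez une très bonne journée."))
```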
1,058
avichr/hebEMO_joy
null
# HebEMO - Emotion Recognition Model for Modern Hebrew

<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">

HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated.

HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.

## Emotion UGC Data Description

Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.

~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below.

| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |

## Performance

### Emotion Recognition

| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
| anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |

*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*

### Sentiment (Polarity) Analysis

| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |

*The sentiment (polarity) analysis model is also available on AWS! For more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*

## How to use

### Emotion Recognition Model

An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as a [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)

```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1

!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()

HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame

hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```

<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />

### For the sentiment classification model (polarity ONLY):

```python
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```

## Contact us

[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>

## If you used this model please cite us as :

Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.

```
@article{chriqui2021hebert,
	title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
	author={Chriqui, Avihay and Yahav, Inbal},
	journal={arXiv preprint arXiv:2102.01909},
	year={2021}
}
```
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br> [Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={arXiv preprint arXiv:2102.01909}, year={2021} } ```
5,431
mrm8488/deberta-v3-large-finetuned-mnli
[ "contradiction", "entailment", "neutral" ]
---
language:
- en
license: mit
widget:
- text: "She was badly wounded already. Another spear would take her down."
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-v3-large-mnli-2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MNLI
      type: glue
      args: mnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8949349064279902
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# DeBERTa-v3-large fine-tuned on MNLI

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6763
- Accuracy: 0.8949

## Model description

[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.

In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technical details about the new model in our [paper](https://arxiv.org/abs/2111.09543).

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.

The DeBERTa V3 large model comes with 24 layers and a hidden size of 1024. It has 304M backbone parameters with a vocabulary containing 128K tokens, which introduces 131M parameters in the Embedding layer. This model was trained using the same 160GB of data as DeBERTa V2.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.3676        | 1.0   | 24544  | 0.3761          | 0.8681   |
| 0.2782        | 2.0   | 49088  | 0.3605          | 0.8881   |
| 0.1986        | 3.0   | 73632  | 0.4672          | 0.8894   |
| 0.1299        | 4.0   | 98176  | 0.5248          | 0.8967   |
| 0.0643        | 5.0   | 122720 | 0.6489          | 0.8999   |

### Framework versions

- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
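Since the card only documents training, here is a minimal NLI inference sketch that is not part of the original card: the premise/hypothesis pair is invented, the DeBERTa-v3 tokenizer needs `sentencepiece` installed, and label names are read from `config.id2label` rather than assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "mrm8488/deberta-v3-large-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "She was badly wounded already."      # invented example
hypothesis = "She was completely unharmed."     # invented example

# Encode the premise/hypothesis pair and softmax the three NLI logits
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```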
3,019
shahrukhx01/gbert-germeval-2021
null
---
language: "de"
license: mit
tags:
- hate-speech-classification
widget:
- text: "Als jemand, der im real existierenden Sozialismus aufgewachsen ist, kann ich über George Weineberg nur sagen, dass er ein Voll...t ist. Finde es schon gut, dass der eingeladen wurde. Hat gezeigt, dass er viel Meinung hat, aber offensichtlich wenig Ahnung. Er hat sich eben so gut wie er kann, für alle sichtbar, zum Trottel gemacht"
- text: "Sobald klar ist dass Trump die Wahl gewinnt liegen alle Deutschen Framing Journalisten im Sauerstoffzelt. Wegen extremer Schnappatmung. Das ist zwar hart, aber Fair!"
---

# Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/gbert-germeval-2021")
model = AutoModelForSequenceClassification.from_pretrained("shahrukhx01/gbert-germeval-2021")
```

# Dataset

```bibtex
@proceedings{germeval-2021-germeval,
    title = "Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments",
    editor = "Risch, Julian and Stoll, Anke and Wilms, Lena and Wiegand, Michael",
    month = sep,
    year = "2021",
    address = "Duesseldorf, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.germeval-1.0",
}
```
1,407
dpalominop/spanish-bert-apoyo
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("dpalominop/spanish-bert-apoyo") model = AutoModelForSequenceClassification.from_pretrained("dpalominop/spanish-bert-apoyo") ```
256
gchhablani/bert-base-cased-finetuned-mnli
[ "contradiction", "entailment", "neutral" ]
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-mnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MNLI
      type: glue
      args: mnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8410292921074044
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-cased-finetuned-mnli

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5721
- Accuracy: 0.8410

The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:

```bash
#!/usr/bin/bash

python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-mnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5323        | 1.0   | 24544 | 0.4431          | 0.8302   |
| 0.3447        | 2.0   | 49088 | 0.4725          | 0.8353   |
| 0.2267        | 3.0   | 73632 | 0.5887          | 0.8368   |

### Framework versions

- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
2,647
philschmid/distilbert-base-multilingual-cased-sentiment
[ "negative", "neutral", "positive" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy - f1 model-index: - name: distilbert-base-multilingual-cased-sentiment results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: all_languages metrics: - name: Accuracy type: accuracy value: 0.7648 - name: F1 type: f1 value: 0.7648 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-sentiment This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.5842 - Accuracy: 0.7648 - F1: 0.7648 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - distributed_type: sagemaker_data_parallel - num_devices: 8 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.6405 | 0.53 | 5000 | 0.5826 | 0.7498 | 0.7498 | | 0.5698 | 1.07 | 10000 | 0.5686 | 0.7612 | 0.7612 | | 0.5286 | 1.6 | 15000 | 0.5593 | 0.7636 | 0.7636 | | 0.5141 | 2.13 | 20000 | 0.5842 | 0.7648 | 0.7648 | | 0.4763 | 2.67 | 25000 | 0.5736 | 0.7637 | 0.7637 | | 0.4549 | 3.2 | 30000 | 0.6027 | 0.7593 | 0.7593 | | 0.4231 | 3.73 | 35000 | 0.6017 | 0.7552 | 0.7552 | | 0.3965 | 4.27 | 40000 | 0.6489 | 0.7551 | 0.7551 | | 0.3744 | 4.8 | 45000 | 0.6426 | 0.7534 | 0.7534 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
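A minimal usage sketch, not part of the auto-generated card; the review sentences are invented, and the model returns negative/neutral/positive scores.

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="philschmid/distilbert-base-multilingual-cased-sentiment",
)

# Invented multilingual review examples
print(classifier("This product exceeded my expectations."))
print(classifier("Der Artikel kam beschädigt an, sehr enttäuschend."))
```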
2,574
helliun/primary_or_secondary_v3
null
Entry not found
15
baykenney/bert-base-gpt2detector-topk40
[ "Human", "Machine" ]
Entry not found
15
cardiffnlp/bertweet-base-irony
null
0
finiteautomata/beto-headlines-sentiment-analysis
[ "NEG", "NEU", "POS" ]
# Targeted Sentiment Analysis in News Headlines

BERT classifier fine-tuned on a news headlines dataset annotated for target polarity (details to be published).

## Examples

The input format is `Headline [SEP] Target`, where the headline is the news title and the target is an entity present in the headline.

Try `Alberto Fernández: "El gobierno de Macri fue un desastre" [SEP] Macri` (should be NEG) and `Alberto Fernández: "El gobierno de Macri fue un desastre" [SEP] Alberto Fernández` (POS or NEU)
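A minimal sketch of one way to build the documented input, not from the original card: encoding the headline and target as a sentence pair lets the tokenizer insert `[SEP]` itself. Whether this exactly matches the training format is an assumption, so compare against the literal `Headline [SEP] Target` string if the results look off.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "finiteautomata/beto-headlines-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

headline = 'Alberto Fernández: "El gobierno de Macri fue un desastre"'
target = "Macri"

# Sentence-pair encoding -> [CLS] headline [SEP] target [SEP]
inputs = tokenizer(headline, target, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

print({model.config.id2label[i]: round(p, 3) for i, p in enumerate(probs.tolist())})
```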
502
Aniemore/rubert-tiny2-russian-emotion-detection
[ "anger", "disgust", "enthusiasm", "fear", "happiness", "neutral", "sadness" ]
--- license: gpl-3.0 language: ["ru"] tags: - russian - classification - emotion - emotion-detection - emotion-recognition - multiclass widget: - text: "Как дела?" - text: "Дурак твой дед" - text: "Только попробуй!!!" - text: "Не хочу в школу(" - text: "Сейчас ровно час дня" - text: "А ты уверен, что эти полоски снизу не врут? Точно уверен? Вот прям 100 процентов?" datasets: - Aniemore/cedr-m7 model-index: - name: RuBERT tiny2 For Russian Text Emotion Detection by Ilya Lubenets results: - task: name: Multilabel Text Classification type: multilabel-text-classification dataset: name: CEDR M7 type: Aniemore/cedr-m7 args: ru metrics: - name: multilabel accuracy type: accuracy value: 85% - task: name: Text Classification type: text-classification dataset: name: CEDR M7 type: Aniemore/cedr-m7 args: ru metrics: - name: accuracy type: accuracy value: 76% --- # First - you should prepare few functions to talk to model ```python import torch from transformers import BertForSequenceClassification, AutoTokenizer LABELS = ['neutral', 'happiness', 'sadness', 'enthusiasm', 'fear', 'anger', 'disgust'] tokenizer = AutoTokenizer.from_pretrained('Aniemore/rubert-tiny2-russian-emotion-detection') model = BertForSequenceClassification.from_pretrained('Aniemore/rubert-tiny2-russian-emotion-detection') @torch.no_grad() def predict_emotion(text: str) -> str: """ We take the input text, tokenize it, pass it through the model, and then return the predicted label :param text: The text to be classified :type text: str :return: The predicted emotion """ inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**inputs) predicted = torch.nn.functional.softmax(outputs.logits, dim=1) predicted = torch.argmax(predicted, dim=1).numpy() return LABELS[predicted[0]] @torch.no_grad() def predict_emotions(text: str) -> list: """ It takes a string of text, tokenizes it, feeds it to the model, and returns a dictionary of emotions and their probabilities :param text: The text you want to classify :type text: str :return: A dictionary of emotions and their probabilities. """ inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**inputs) predicted = torch.nn.functional.softmax(outputs.logits, dim=1) emotions_list = {} for i in range(len(predicted.numpy()[0].tolist())): emotions_list[LABELS[i]] = predicted.numpy()[0].tolist()[i] return emotions_list ``` # And then - just gently ask a model to predict your emotion ```python simple_prediction = predict_emotion("Какой же сегодня прекрасный день, братья") not_simple_prediction = predict_emotions("Какой же сегодня прекрасный день, братья") print(simple_prediction) print(not_simple_prediction) # happiness # {'neutral': 0.0004941817605867982, 'happiness': 0.9979524612426758, 'sadness': 0.0002536600804887712, 'enthusiasm': 0.0005498139653354883, 'fear': 0.00025326196919195354, 'anger': 0.0003583927755244076, 'disgust': 0.00013807788491249084} ``` # Or, just simply use [our package (GitHub)](https://github.com/aniemore/Aniemore), that can do whatever you want (or maybe not) 🤗 # Citations ``` @misc{Aniemore, author = {Артем Аментес, Илья Лубенец, Никита Давидчук}, title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека}, year = {2022}, publisher = {Hugging Face}, journal = {Hugging Face Hub}, howpublished = {\url{https://huggingface.com/aniemore/Aniemore}}, email = {hello@socialcode.ru} } ```
3,810
moussaKam/barthez-sentiment-classification
null
---
tags:
- text-classification
- bart
language:
- fr
license: apache-2.0
widget:
- text: Barthez est le meilleur gardien du monde.
---

### BARThez model fine-tuned on an opinion classification task.

paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez

```
@article{eddine2020barthez,
  title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
  author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
  journal={arXiv preprint arXiv:2010.12321},
  year={2020}
}
```
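A minimal usage sketch, not in the original card, using the widget sentence from the card; the returned label names come from the model config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="moussaKam/barthez-sentiment-classification",
)

print(classifier("Barthez est le meilleur gardien du monde."))
```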
545
yangheng/deberta-v3-large-absa-v1.1
[ "Negative", "Neutral", "Positive" ]
---
language:
- en
tags:
- aspect-based-sentiment-analysis
- PyABSA
license: mit
datasets:
- laptop14
- restaurant14
- restaurant16
- ACL-Twitter
- MAMS
- Television
- TShirt
- Yelp
metrics:
- accuracy
- macro-f1
widget:
- text: "[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP] "
---

# Note

This model was trained with 30k+ ABSA samples; see [ABSADatasets](https://github.com/yangheng95/ABSADatasets). The test sets are not included in training, so you can use this model for training and benchmarking on common ABSA datasets, e.g., the Laptop14 and Rest14 datasets (except for the Rest15 dataset!).

# DeBERTa for aspect-based sentiment analysis

The `deberta-v3-large-absa` model for aspect-based sentiment analysis, trained with English datasets from [ABSADatasets](https://github.com/yangheng95/ABSADatasets).

## Training Model

This model is trained based on the FAST-LCF-BERT model with `microsoft/deberta-v3-large`, which comes from [PyABSA](https://github.com/yangheng95/PyABSA). To track state-of-the-art models, please see [PyABSA](https://github.com/yangheng95/PyABSA).

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-large-absa-v1.1")
model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-large-absa-v1.1")
```

## Example in PyABSA

An [example](https://github.com/yangheng95/PyABSA/blob/release/demos/aspect_polarity_classification/train_apc_multilingual.py) of using FAST-LCF-BERT on PyABSA datasets.

## Datasets

This model is fine-tuned with 180k examples for the ABSA dataset (including augmented data). Training dataset files:

```
loading: integrated_datasets/apc_datasets/SemEval/laptop14/Laptops_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/Restaurants_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/restaurant_train.raw
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/train.raw
loading: integrated_datasets/apc_datasets/MAMS/train.xml.dat
loading: integrated_datasets/apc_datasets/Television/Television_Train.xml.seg
loading: integrated_datasets/apc_datasets/TShirt/Menstshirt_Train.xml.seg
loading: integrated_datasets/apc_datasets/Yelp/yelp.train.txt
```

If you use this model in your research, please cite our paper:

```
@article{YangZMT21,
  author     = {Heng Yang and Biqing Zeng and Mayi Xu and Tianxing Wang},
  title      = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable Sentiment Dependency Learning},
  journal    = {CoRR},
  volume     = {abs/2110.08604},
  year       = {2021},
  url        = {https://arxiv.org/abs/2110.08604},
  eprinttype = {arXiv},
  eprint     = {2110.08604},
  timestamp  = {Fri, 22 Oct 2021 13:33:09 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2110-08604.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
3,146