Columns:
- index: int64 (0 – 22.3k)
- modelId: string (length 8 – 111)
- label: list
- readme: string (length 0 – 385k)
599
abhishek/autonlp-imdb_eval-71421
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - abhishek/autonlp-data-imdb_eval --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 71421 ## Validation Metrics - Loss: 0.4114699363708496 - Accuracy: 0.8248248248248248 - Precision: 0.8305439330543933 - Recall: 0.8085539714867617 - AUC: 0.9088033420466026 - F1: 0.8194014447884417 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb_eval-71421 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb_eval-71421", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb_eval-71421", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
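The Python snippet on this card (and on the near-identical AutoNLP cards below) stops at the raw forward pass. A minimal sketch of turning `outputs` into a predicted label, assuming the checkpoint carries the usual `id2label` mapping in its config (the `use_auth_token` flag is taken from the card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load exactly as the card's snippet does.
model = AutoModelForSequenceClassification.from_pretrained(
    "abhishek/autonlp-imdb_eval-71421", use_auth_token=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "abhishek/autonlp-imdb_eval-71421", use_auth_token=True
)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)   # convert logits to class probabilities
pred_id = int(probs.argmax(dim=-1))     # index of the most likely class
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```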
600
abhishek/autonlp-imdb_sentiment_classification-31154
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 31154 ## Validation Metrics - Loss: 0.19292379915714264 - Accuracy: 0.9395 - Precision: 0.9569557080474111 - Recall: 0.9204 - AUC: 0.9851040399999998 - F1: 0.9383219492302988 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb_sentiment_classification-31154 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
601
abhishek/autonlp-japanese-sentiment-59362
[ "negative", "positive" ]
--- tags: autonlp language: ja widget: - text: "I love AutoNLP 🤗" datasets: - abhishek/autonlp-data-japanese-sentiment --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 59362 ## Validation Metrics - Loss: 0.13092292845249176 - Accuracy: 0.9527127414314258 - Precision: 0.9634070704982427 - Recall: 0.9842171959602166 - AUC: 0.9667289746092403 - F1: 0.9737009564152002 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-japanese-sentiment-59362 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-japanese-sentiment-59362", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-japanese-sentiment-59362", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
602
abhishek/autonlp-japanese-sentiment-59363
[ "negative", "positive" ]
--- tags: autonlp language: ja widget: - text: "🤗AutoNLPが大好きです" datasets: - abhishek/autonlp-data-japanese-sentiment --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 59363 ## Validation Metrics - Loss: 0.12651239335536957 - Accuracy: 0.9532079853817648 - Precision: 0.9729688278823665 - Recall: 0.9744633462616643 - AUC: 0.9717333684823413 - F1: 0.9737155136027014 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-japanese-sentiment-59363 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-japanese-sentiment-59363", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-japanese-sentiment-59363", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
603
abhishek/autonlp-toxic-new-30516963
[ "False", "True" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - abhishek/autonlp-data-toxic-new co2_eq_emissions: 30.684995819386277 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 30516963 - CO2 Emissions (in grams): 30.684995819386277 ## Validation Metrics - Loss: 0.08340361714363098 - Accuracy: 0.9688222161294113 - Precision: 0.9102096627164995 - Recall: 0.7692604006163328 - AUC: 0.9859340458715813 - F1: 0.8338204592901879 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-toxic-new-30516963 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-toxic-new-30516963", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-toxic-new-30516963", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
604
adam-chell/tweet-sentiment-analyzer
[ "NEG", "NEU", "POS" ]
This model was trained by fine-tuning the BERTweet sentiment classification model "finiteautomata/bertweet-base-sentiment-analysis" on a labeled positive/negative dataset of tweets. Email: adam.chellaoui@epfl.ch
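The card gives no usage snippet; a minimal sketch, assuming the checkpoint works with the standard text-classification pipeline and returns the NEG/NEU/POS labels listed above:

```python
from transformers import pipeline

# Hypothetical usage; only the checkpoint name comes from the card.
classifier = pipeline("text-classification", model="adam-chell/tweet-sentiment-analyzer")
print(classifier("What a fantastic game last night!"))
# Expected shape of the output: [{'label': 'POS', 'score': ...}]
```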
605
adamlin/filter
[ "LABEL_0" ]
--- language: - en tags: - generated_from_trainer datasets: - glue model_index: - name: filter results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # filter This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the GLUE STSB dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 13 - gradient_accumulation_steps: 2 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.9.0 - Tokenizers 0.10.3
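The hyperparameter list in the card above maps one-to-one onto 🤗 Transformers `TrainingArguments`; the sketch below reconstructs that configuration (the `output_dir` is hypothetical, everything else is read off the card):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="filter",              # hypothetical; not stated on the card
    learning_rate=5e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=13,
    gradient_accumulation_steps=2,    # total train batch size: 6 * 2 = 12
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
    fp16=True,                        # "mixed_precision_training: Native AMP"
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
```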
606
addy88/perceiver_imdb
[ "neg", "pos" ]
### How to use Here is how to use this model in PyTorch: ```python import torch from transformers import PerceiverTokenizer, PerceiverForMaskedLM device = torch.device("cuda" if torch.cuda.is_available() else "cpu") tokenizer = PerceiverTokenizer.from_pretrained("addy88/perceiver_imdb") model = PerceiverForMaskedLM.from_pretrained("addy88/perceiver_imdb").to(device) text = "This is an incomplete sentence where some words are missing." # prepare input encoding = tokenizer(text, padding="max_length", return_tensors="pt") # mask " missing.". Note that the model performs much better if the masked span starts with a space. encoding.input_ids[0, 52:61] = tokenizer.mask_token_id inputs, input_mask = encoding.input_ids.to(device), encoding.attention_mask.to(device) # forward pass outputs = model(inputs=inputs, attention_mask=input_mask) logits = outputs.logits masked_tokens_predictions = logits[0, 51:61].argmax(dim=-1) print(tokenizer.decode(masked_tokens_predictions)) >>> should print " missing." ```
607
addy88/programming-lang-identifier
[ "go", "java", "javascript", "php", "python", "ruby" ]
This model is a fine-tuned version of CodeBERT (a RoBERTa-based model), trained on CodeSearchNet. ### Quick start: from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("addy88/programming-lang-identifier") model = AutoModelForSequenceClassification.from_pretrained("addy88/programming-lang-identifier") inputs = tokenizer(CODE_TO_IDENTIFY, return_tensors="pt") logits = model(**inputs).logits language_idx = logits.argmax().item() # index for the resulting label
608
adelgasmi/autonlp-kpmg_nlp-18833547
[ "0", "1", "2", "3", "4" ]
--- tags: autonlp language: ar widget: - text: "I love AutoNLP 🤗" datasets: - adelgasmi/autonlp-data-kpmg_nlp co2_eq_emissions: 64.58945483765274 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 18833547 - CO2 Emissions (in grams): 64.58945483765274 ## Validation Metrics - Loss: 0.14247722923755646 - Accuracy: 0.9586074193404036 - Macro F1: 0.9468339778730883 - Micro F1: 0.9586074193404036 - Weighted F1: 0.9585551117678807 - Macro Precision: 0.9445436604001405 - Micro Precision: 0.9586074193404036 - Weighted Precision: 0.9591405429662925 - Macro Recall: 0.9499427161888565 - Micro Recall: 0.9586074193404036 - Weighted Recall: 0.9586074193404036 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/adelgasmi/autonlp-kpmg_nlp-18833547 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("adelgasmi/autonlp-kpmg_nlp-18833547", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("adelgasmi/autonlp-kpmg_nlp-18833547", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
610
Jackett/subject_classifier
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
Label association: {'Biology': 0, 'Physics': 1, 'Chemistry': 2, 'Maths': 3}
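Since the checkpoint exposes generic LABEL_0…LABEL_3 names, a minimal sketch (assumed usage; only the mapping comes from the card) of resolving pipeline output to subject names:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Jackett/subject_classifier")

# Invert the card's label association to go from class index to subject name.
id_to_subject = {0: "Biology", 1: "Physics", 2: "Chemistry", 3: "Maths"}

result = classifier("Covalent bonds share electron pairs between atoms.")[0]
idx = int(result["label"].split("_")[-1])   # "LABEL_2" -> 2
print(id_to_subject[idx], result["score"])
```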
611
adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717
[ "Dance", "Heavy Metal", "Hip Hop", "Indie", "Pop", "Rock" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - adrianmoses/autonlp-data-auto-nlp-lyrics-classification co2_eq_emissions: 88.89388195672073 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 19333717 - CO2 Emissions (in grams): 88.89388195672073 ## Validation Metrics - Loss: 1.0499154329299927 - Accuracy: 0.6207088513638894 - Macro F1: 0.46250803661544765 - Micro F1: 0.6207088513638894 - Weighted F1: 0.5850362079928957 - Macro Precision: 0.6451479987704787 - Micro Precision: 0.6207088513638894 - Weighted Precision: 0.6285080101186085 - Macro Recall: 0.4405680478429344 - Micro Recall: 0.6207088513638894 - Weighted Recall: 0.6207088513638894 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
613
ahmedrachid/FinancialBERT-Sentiment-Analysis
[ "negative", "neutral", "positive" ]
--- language: en tags: - financial-sentiment-analysis - sentiment-analysis datasets: - financial_phrasebank widget: - text: Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales. - text: Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000. - text: Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008. --- ### FinancialBERT for Sentiment Analysis [*FinancialBERT*](https://huggingface.co/ahmedrachid/FinancialBERT) is a BERT model pre-trained on a large corpus of financial texts. Its purpose is to advance NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from the model without needing the significant computational resources required to train it. The model was fine-tuned for the Sentiment Analysis task on the _Financial PhraseBank_ dataset. Experiments show that this model outperforms general BERT and other financial domain-specific models. More details on `FinancialBERT`'s pre-training process can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining ### Training data The FinancialBERT model was fine-tuned on [Financial PhraseBank](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10), a dataset consisting of 4840 financial news sentences categorised by sentiment (negative, neutral, positive). ### Fine-tuning hyper-parameters - learning_rate = 2e-5 - batch_size = 32 - max_seq_length = 512 - num_train_epochs = 5 ### Evaluation metrics The evaluation metrics used are Precision, Recall and F1-score. The following is the classification report on the test set. | sentiment | precision | recall | f1-score | support | | ------------- |:-------------:|:-------------:|:-------------:| -----:| | negative | 0.96 | 0.97 | 0.97 | 58 | | neutral | 0.98 | 0.99 | 0.98 | 279 | | positive | 0.98 | 0.97 | 0.97 | 148 | | macro avg | 0.97 | 0.98 | 0.98 | 485 | | weighted avg | 0.98 | 0.98 | 0.98 | 485 | ### How to use The model can be used with the Transformers pipeline for sentiment analysis. ```python from transformers import BertTokenizer, BertForSequenceClassification from transformers import pipeline model = BertForSequenceClassification.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis",num_labels=3) tokenizer = BertTokenizer.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis") nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) sentences = ["Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.", "Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.", "Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.", ] results = nlp(sentences) print(results) [{'label': 'positive', 'score': 0.9998133778572083}, {'label': 'neutral', 'score': 0.9997822642326355}, {'label': 'negative', 'score': 0.9877365231513977}] ``` > Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
614
ainize/klue-bert-base-re
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
# bert-base for KLUE Relation Extraction task. Fine-tuned klue/bert-base using the KLUE RE dataset. - <a href="https://klue-benchmark.com/">KLUE Benchmark Official Webpage</a> - <a href="https://github.com/KLUE-benchmark/KLUE">KLUE Official Github</a> - <a href="https://github.com/ainize-team/klue-re-workspace">KLUE RE Github</a> - Run KLUE RE on a free GPU: <a href="https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ainize-team/klue-re-workspace">Ainize Workspace</a> <br> # Usage <pre><code> import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-re") model = AutoModelForSequenceClassification.from_pretrained("ainize/klue-bert-base-re") # Wrap the subject entity in "&lt;subj&gt;", "&lt;/subj&gt;" and the object entity in "&lt;obj&gt;", "&lt;/obj&gt;". sentence = "&lt;subj&gt;손흥민&lt;/subj&gt;은 &lt;obj&gt;대한민국&lt;/obj&gt;에서 태어났다." encodings = tokenizer(sentence, max_length=128, truncation=True, padding="max_length", return_tensors="pt") outputs = model(**encodings) logits = outputs['logits'] preds = torch.argmax(logits, dim=1) </code></pre> <br> # About us - <a href="https://ainize.ai/teachable-nlp">Teachable NLP</a> - Train NLP models with your own text without writing any code - <a href="https://ainize.ai/">Ainize</a> - Deploy ML projects using a free GPU
617
akahana/indonesia-emotion-roberta
[ "SEDIH", "MARAH", "CINTA", "TAKUT", "BAHAGIA" ]
--- language: "id" widget: - text: "dia orang yang baik ya bunds." --- ## how to use ```python from transformers import pipeline, set_seed path = "akahana/indonesia-emotion-roberta" emotion = pipeline('text-classification', model=path,device=0) set_seed(42) kalimat = "dia orang yang baik ya bunds." preds = emotion(kalimat) preds [{'label': 'BAHAGIA', 'score': 0.8790940046310425}] ```
618
akahana/indonesia-sentiment-roberta
[ "POSITIF", "NETRAL", "NEGATIF" ]
--- language: "id" widget: - text: "dia orang yang baik ya bunds." --- ## how to use ```python from transformers import pipeline, set_seed path = "akahana/indonesia-sentiment-roberta" emotion = pipeline('text-classification', model=path,device=0) set_seed(42) kalimat = "dia orang yang baik ya bunds." preds = emotion(kalimat) preds ```
619
akdeniz27/bert-turkish-text-classification
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8" ]
--- language: tr --- # Turkish Text Classification for Complaints Data Set This model is a fine-tuned version of the Turkish BERT model (https://github.com/stefan-it/turkish-bert), trained on complaint text classification data with the following 9 categories: id_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ', 4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER', 7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}
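A hedged usage sketch for this card: rather than post-processing LABEL_i strings, the card's id_to_category mapping can be written into the model config so the pipeline reports category names directly (everything except the mapping itself is assumed):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model = AutoModelForSequenceClassification.from_pretrained("akdeniz27/bert-turkish-text-classification")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-turkish-text-classification")

# Install the card's mapping so outputs read 'DURAKTA DURMAMA' instead of 'LABEL_2'.
id_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA',
                  3: 'ŞOFÖR-PERSONEL ŞİKAYETİ', 4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ',
                  5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER', 7: 'TEŞEKKÜR',
                  8: 'DİĞER TALEPLER'}
model.config.id2label = id_to_category
model.config.label2id = {v: k for k, v in id_to_category.items()}

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Otobüs durakta durmadı."))  # a hypothetical complaint text
```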
620
akhooli/xlm-r-large-arabic-sent
[ "LABEL_0_mixed", "LABEL_1_neg", "LABEL_2_pos" ]
--- language: - ar - en - multilingual license: mit --- ### xlm-r-large-arabic-sent Multilingual sentiment classification (Label_0: mixed, Label_1: negative, Label_2: positive) of Arabic reviews by fine-tuning XLM-RoBERTa-Large. It supports zero-shot classification of other languages (and also works on mixed-language text, e.g. Arabic & English). The mixed category is not accurate and may confuse the other classes (it was based on a rating of 3 out of 5 in reviews). Usage: see the last section in this [Colab notebook](https://lnkd.in/d3bCFyZ)
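The card defers usage to a Colab notebook; a minimal assumed sketch using the label names from the checkpoint's label list above:

```python
from transformers import pipeline

# Assumed usage; the card itself only links to a Colab notebook.
classifier = pipeline("text-classification", model="akhooli/xlm-r-large-arabic-sent")

# Checkpoint labels: LABEL_0_mixed, LABEL_1_neg, LABEL_2_pos.
print(classifier("الخدمة ممتازة والتوصيل سريع"))  # a hypothetical positive Arabic review
```

The sibling checkpoint akhooli/xlm-r-large-arabic-toxic (next row) can be called the same way.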
621
akhooli/xlm-r-large-arabic-toxic
[ "LABEL_0_negative", "LABEL_1_positive" ]
--- language: - ar - en license: mit --- ### xlm-r-large-arabic-toxic (toxic/hate-speech classifier) Toxic (hate speech) classification (Label_0: non-toxic, Label_1: toxic) of Arabic comments by fine-tuning XLM-RoBERTa-Large. It supports zero-shot classification of other languages (and also works on mixed-language text, e.g. Arabic & English). Usage and further info: see the last section in this [Colab notebook](https://lnkd.in/d3bCFyZ)
622
akilesh96/autonlp-mrcooper_text_classification-529614927
[ "Animals", "Compliment", "Education", "Health", "Heavy Emotion", "Joke", "Love", "Politics", "Religion", "Science", "Self" ]
--- tags: autonlp language: en widget: - text: "Not Many People Know About The City 1200 Feet Below Detroit" - text: "Bob accepts the challenge, and the next week they're standing in Saint Peters square. 'This isnt gonna work, he's never going to see me here when theres this much people. You stay here, I'll go talk to him and you'll see me on the balcony, the guards know me too.' Half an hour later, Bob and the pope appear side by side on the balcony. Bobs boss gets a heart attack, and Bob goes to visit him in the hospital." - text: "I’m sorry if you made it this far, but I’m just genuinely idk, I feel like I shouldn’t give up, it’s just getting harder to come back from stuff like this." datasets: - akilesh96/autonlp-data-mrcooper_text_classification co2_eq_emissions: 5.999771405025692 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 529614927 - CO2 Emissions (in grams): 5.999771405025692 ## Validation Metrics - Loss: 0.7582379579544067 - Accuracy: 0.7636103151862464 - Macro F1: 0.770630619486531 - Micro F1: 0.7636103151862464 - Weighted F1: 0.765233270165301 - Macro Precision: 0.7746285216467107 - Micro Precision: 0.7636103151862464 - Weighted Precision: 0.7683270753840836 - Macro Recall: 0.7680576576961138 - Micro Recall: 0.7636103151862464 - Weighted Recall: 0.7636103151862464 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/akilesh96/autonlp-mrcooper_text_classification-529614927 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
623
akshara23/distilbert-base-uncased-finetuned-cola
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - matthews_correlation model_index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification metric: name: Matthews Correlation type: matthews_correlation value: 0.6290322580645161 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0475 - Matthews Correlation: 0.6290 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 16 | 1.3863 | 0.0 | | No log | 2.0 | 32 | 1.2695 | 0.4503 | | No log | 3.0 | 48 | 1.1563 | 0.6110 | | No log | 4.0 | 64 | 1.0757 | 0.6290 | | No log | 5.0 | 80 | 1.0475 | 0.6290 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
624
albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135
[ "0", "1", "2", "3", "4", "5" ]
--- tags: autonlp language: bn widget: - text: "I love AutoNLP 🤗" datasets: - albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 1311135 ## Validation Metrics - Loss: 0.35616958141326904 - Accuracy: 0.8979447200566973 - Macro F1: 0.8545383956197669 - Micro F1: 0.8979447200566975 - Weighted F1: 0.8983951947775538 - Macro Precision: 0.8615833774439791 - Micro Precision: 0.8979447200566973 - Weighted Precision: 0.9013559365881655 - Macro Recall: 0.8516503001777104 - Micro Recall: 0.8979447200566973 - Weighted Recall: 0.8979447200566973 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
625
alecmullen/autonlp-group-classification-441411446
[ "Beauty", "Business/Finance", "Faith", "Fitness", "Food", "Gaming", "Local", "Marketplace", "Memes", "Music", "None", "Social", "Sports", "TV/Movies", "Travel" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - alecmullen/autonlp-data-group-classification co2_eq_emissions: 0.4362732160754736 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 441411446 - CO2 Emissions (in grams): 0.4362732160754736 ## Validation Metrics - Loss: 0.7598486542701721 - Accuracy: 0.8222222222222222 - Macro F1: 0.2912091747693842 - Micro F1: 0.8222222222222222 - Weighted F1: 0.7707160863181806 - Macro Precision: 0.29631463146314635 - Micro Precision: 0.8222222222222222 - Weighted Precision: 0.7341339689524508 - Macro Recall: 0.30174603174603176 - Micro Recall: 0.8222222222222222 - Weighted Recall: 0.8222222222222222 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alecmullen/autonlp-group-classification-441411446 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
627
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7600 - Accuracy: 0.8144 - F1: 0.8788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 | | No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 | | 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 | | 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 | | 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
628
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4064 - Accuracy: 0.8289 - F1: 0.8901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4163 | 0.8085 | 0.8780 | | No log | 2.0 | 390 | 0.4098 | 0.8268 | 0.8878 | | 0.312 | 3.0 | 585 | 0.5892 | 0.8244 | 0.8861 | | 0.312 | 4.0 | 780 | 0.7580 | 0.8232 | 0.8845 | | 0.312 | 5.0 | 975 | 0.9028 | 0.8183 | 0.8824 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
629
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3825 - Accuracy: 0.8144 - F1: 0.8833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3975 | 0.8122 | 0.8795 | | No log | 2.0 | 390 | 0.4376 | 0.8085 | 0.8673 | | 0.3169 | 3.0 | 585 | 0.5736 | 0.8171 | 0.8790 | | 0.3169 | 4.0 | 780 | 0.8178 | 0.8098 | 0.8754 | | 0.3169 | 5.0 | 975 | 0.9244 | 0.8073 | 0.8738 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
630
ali2066/finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0926 - Accuracy: 0.9772 - F1: 0.9883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 104 | 0.0539 | 0.9885 | 0.9942 | | No log | 2.0 | 208 | 0.0282 | 0.9885 | 0.9942 | | No log | 3.0 | 312 | 0.0317 | 0.9914 | 0.9956 | | No log | 4.0 | 416 | 0.0462 | 0.9885 | 0.9942 | | 0.0409 | 5.0 | 520 | 0.0517 | 0.9885 | 0.9942 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
631
ali2066/finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3358 - Accuracy: 0.8688 - F1: 0.9225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 81 | 0.4116 | 0.8382 | 0.9027 | | No log | 2.0 | 162 | 0.4360 | 0.8382 | 0.8952 | | No log | 3.0 | 243 | 0.5719 | 0.8382 | 0.8995 | | No log | 4.0 | 324 | 0.7251 | 0.8493 | 0.9021 | | No log | 5.0 | 405 | 0.8384 | 0.8456 | 0.9019 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
632
ali2066/finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5777 - Accuracy: 0.6794 - F1: 0.5010 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 48 | 0.6059 | 0.63 | 0.4932 | | No log | 2.0 | 96 | 0.6327 | 0.705 | 0.5630 | | No log | 3.0 | 144 | 0.7003 | 0.695 | 0.5197 | | No log | 4.0 | 192 | 0.9368 | 0.69 | 0.4655 | | No log | 5.0 | 240 | 1.1935 | 0.685 | 0.4425 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
633
ali2066/finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4787 - Accuracy: 0.8138 - F1: 0.8785 - Precision: 0.8489 - Recall: 0.9101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 390 | 0.4335 | 0.7732 | 0.8533 | 0.8209 | 0.8883 | | 0.5141 | 2.0 | 780 | 0.4196 | 0.8037 | 0.8721 | 0.8446 | 0.9015 | | 0.3368 | 3.0 | 1170 | 0.4519 | 0.8098 | 0.8779 | 0.8386 | 0.9212 | | 0.2677 | 4.0 | 1560 | 0.4787 | 0.8122 | 0.8785 | 0.8452 | 0.9146 | | 0.2677 | 5.0 | 1950 | 0.4912 | 0.8146 | 0.8794 | 0.8510 | 0.9097 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
634
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
[ "NEGATIVE", "POSITIVE" ]
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51 This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4563 - Accuracy: 0.8440 - F1: 0.8954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4302 | 0.8073 | 0.8754 | | No log | 2.0 | 390 | 0.3970 | 0.8220 | 0.8875 | | 0.3703 | 3.0 | 585 | 0.3972 | 0.8402 | 0.8934 | | 0.3703 | 4.0 | 780 | 0.4945 | 0.8390 | 0.8935 | | 0.3703 | 5.0 | 975 | 0.5354 | 0.8305 | 0.8898 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
635
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4208 - Accuracy: 0.8283 - F1: 0.8915 - Precision: 0.8487 - Recall: 0.9389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 390 | 0.4443 | 0.7768 | 0.8589 | 0.8072 | 0.9176 | | 0.4532 | 2.0 | 780 | 0.4603 | 0.8098 | 0.8791 | 0.8302 | 0.9341 | | 0.2608 | 3.0 | 1170 | 0.5284 | 0.8061 | 0.8713 | 0.8567 | 0.8863 | | 0.1577 | 4.0 | 1560 | 0.6398 | 0.8085 | 0.8749 | 0.8472 | 0.9044 | | 0.1577 | 5.0 | 1950 | 0.7089 | 0.8085 | 0.8741 | 0.8516 | 0.8979 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
636
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6168 - Accuracy: 0.8286 - F1: 0.8887 - Precision: 0.8628 - Recall: 0.9162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 390 | 0.3890 | 0.8110 | 0.8749 | 0.8631 | 0.8871 | | 0.4535 | 2.0 | 780 | 0.3921 | 0.8439 | 0.8984 | 0.8721 | 0.9264 | | 0.266 | 3.0 | 1170 | 0.4454 | 0.8415 | 0.8947 | 0.8860 | 0.9034 | | 0.16 | 4.0 | 1560 | 0.5610 | 0.8427 | 0.8957 | 0.8850 | 0.9067 | | 0.16 | 5.0 | 1950 | 0.6180 | 0.8488 | 0.9010 | 0.8799 | 0.9231 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
637
ali2066/finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4345 - Accuracy: 0.8321 - F1: 0.8904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3922 | 0.8061 | 0.8747 | | No log | 2.0 | 390 | 0.3764 | 0.8171 | 0.8837 | | 0.4074 | 3.0 | 585 | 0.3873 | 0.8220 | 0.8843 | | 0.4074 | 4.0 | 780 | 0.4361 | 0.8232 | 0.8854 | | 0.4074 | 5.0 | 975 | 0.4555 | 0.8159 | 0.8793 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
638
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5002 - Accuracy: 0.8103 - F1: 0.8764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4178 | 0.7963 | 0.8630 | | No log | 2.0 | 390 | 0.3935 | 0.8061 | 0.8770 | | 0.4116 | 3.0 | 585 | 0.4037 | 0.8085 | 0.8735 | | 0.4116 | 4.0 | 780 | 0.4696 | 0.8146 | 0.8796 | | 0.4116 | 5.0 | 975 | 0.4849 | 0.8207 | 0.8823 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
639
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4917 - Accuracy: 0.8231 - F1: 0.8833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3883 | 0.8146 | 0.8833 | | No log | 2.0 | 390 | 0.3607 | 0.8390 | 0.8964 | | 0.4085 | 3.0 | 585 | 0.3812 | 0.8488 | 0.9042 | | 0.4085 | 4.0 | 780 | 0.3977 | 0.8549 | 0.9077 | | 0.4085 | 5.0 | 975 | 0.4233 | 0.8573 | 0.9092 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
640
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4638 - Accuracy: 0.8247 - F1: 0.8867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4069 | 0.7976 | 0.875 | | No log | 2.0 | 390 | 0.4061 | 0.8134 | 0.8838 | | 0.4074 | 3.0 | 585 | 0.4075 | 0.8134 | 0.8798 | | 0.4074 | 4.0 | 780 | 0.4746 | 0.8256 | 0.8885 | | 0.4074 | 5.0 | 975 | 0.4881 | 0.8220 | 0.8845 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
641
ali2066/finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0914 - Accuracy: 0.9746 - F1: 0.9870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 104 | 0.0501 | 0.9828 | 0.9913 | | No log | 2.0 | 208 | 0.0435 | 0.9828 | 0.9913 | | No log | 3.0 | 312 | 0.0414 | 0.9828 | 0.9913 | | No log | 4.0 | 416 | 0.0424 | 0.9799 | 0.9898 | | 0.0547 | 5.0 | 520 | 0.0482 | 0.9828 | 0.9913 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
642
ali2066/finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3455 - Accuracy: 0.8609 - F1: 0.9156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 81 | 0.4468 | 0.8235 | 0.8929 | | No log | 2.0 | 162 | 0.4497 | 0.8382 | 0.9 | | No log | 3.0 | 243 | 0.4861 | 0.8309 | 0.8940 | | No log | 4.0 | 324 | 0.5087 | 0.8235 | 0.8879 | | No log | 5.0 | 405 | 0.5228 | 0.8199 | 0.8858 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
643
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7224 - Accuracy: 0.6979 - F1: 0.4736 - Precision: 0.5074 - Recall: 0.4440 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 95 | 0.6009 | 0.65 | 0.2222 | 0.625 | 0.1351 | | No log | 2.0 | 190 | 0.6140 | 0.675 | 0.3689 | 0.6552 | 0.2568 | | No log | 3.0 | 285 | 0.6580 | 0.67 | 0.4590 | 0.5833 | 0.3784 | | No log | 4.0 | 380 | 0.7560 | 0.665 | 0.4806 | 0.5636 | 0.4189 | | No log | 5.0 | 475 | 0.8226 | 0.665 | 0.464 | 0.5686 | 0.3919 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
644
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
645
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5819
- Accuracy: 0.7058
- F1: 0.4267

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.6110 | 0.665 | 0.0 |
| No log | 2.0 | 96 | 0.5706 | 0.685 | 0.2588 |
| No log | 3.0 | 144 | 0.5484 | 0.725 | 0.5299 |
| No log | 4.0 | 192 | 0.5585 | 0.71 | 0.4727 |
| No log | 5.0 | 240 | 0.5616 | 0.725 | 0.5133 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
646
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
647
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3944
- Accuracy: 0.8279
- F1: 0.8901

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3946 | 0.8012 | 0.8743 |
| No log | 2.0 | 390 | 0.3746 | 0.8329 | 0.8929 |
| 0.3644 | 3.0 | 585 | 0.4288 | 0.8268 | 0.8849 |
| 0.3644 | 4.0 | 780 | 0.5352 | 0.8232 | 0.8841 |
| 0.3644 | 5.0 | 975 | 0.5768 | 0.8268 | 0.8864 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
648
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6071
- Accuracy: 0.8337
- F1: 0.8922

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3920 | 0.7988 | 0.8624 |
| No log | 2.0 | 390 | 0.3873 | 0.8171 | 0.8739 |
| 0.3673 | 3.0 | 585 | 0.4354 | 0.8256 | 0.8835 |
| 0.3673 | 4.0 | 780 | 0.5358 | 0.8293 | 0.8887 |
| 0.3673 | 5.0 | 975 | 0.5616 | 0.8366 | 0.8923 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
649
ali2066/finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0890
- Accuracy: 0.9750
- F1: 0.9873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0485 | 0.9885 | 0.9942 |
| No log | 2.0 | 208 | 0.0558 | 0.9857 | 0.9927 |
| No log | 3.0 | 312 | 0.0501 | 0.9828 | 0.9913 |
| No log | 4.0 | 416 | 0.0593 | 0.9828 | 0.9913 |
| 0.04 | 5.0 | 520 | 0.0653 | 0.9828 | 0.9913 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
650
ali2066/finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3767
- Accuracy: 0.8638
- F1: 0.9165

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4489 | 0.8309 | 0.8969 |
| No log | 2.0 | 162 | 0.4429 | 0.8272 | 0.8915 |
| No log | 3.0 | 243 | 0.5154 | 0.8529 | 0.9083 |
| No log | 4.0 | 324 | 0.5552 | 0.8309 | 0.8925 |
| No log | 5.0 | 405 | 0.5896 | 0.8309 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
651
ali2066/finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6020
- Accuracy: 0.7032
- F1: 0.4851

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5914 | 0.67 | 0.0294 |
| No log | 2.0 | 96 | 0.5616 | 0.695 | 0.2824 |
| No log | 3.0 | 144 | 0.5596 | 0.73 | 0.5909 |
| No log | 4.0 | 192 | 0.6273 | 0.73 | 0.5 |
| No log | 5.0 | 240 | 0.6370 | 0.71 | 0.5 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
652
ali2066/finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
653
ali2066/finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
654
ali2066/finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
655
ali2066/finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
656
ali2066/finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
657
ali2066/finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
658
ali2066/finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
659
ali2066/finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
660
ali2066/finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
661
ali2066/finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
662
ali2066/finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
663
ali2066/finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
664
ali2066/finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
665
ali2066/finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
666
ali2066/finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
667
ali2066/finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
668
ali2066/finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
669
ali2066/finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
670
ali2066/finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
671
ali2066/finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
672
ali2066/finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
674
allenai/longformer-scico
[ "child", "coref", "not related", "parent" ]
---
language: en
tags:
- longformer
- longformer-scico
license: apache-2.0
datasets:
- allenai/scico
inference: false
---

# Longformer for SciCo

This model is the `unified` model discussed in the paper [SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts (AKBC 2021)](https://openreview.net/forum?id=OFLbgUP04nC) that formulates the task of hierarchical cross-document coreference resolution (H-CDCR) as a multiclass problem. The model takes as input two mentions `m1` and `m2` with their corresponding context and outputs 4 scores:

* 0: not related
* 1: `m1` and `m2` corefer
* 2: `m1` is a parent of `m2`
* 3: `m1` is a child of `m2`.

We provide the following code as an example to set the global attention on the special tokens: `<s>`, `<m>` and `</m>`.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained('allenai/longformer-scico')
model = AutoModelForSequenceClassification.from_pretrained('allenai/longformer-scico')

start_token = tokenizer.convert_tokens_to_ids("<m>")
end_token = tokenizer.convert_tokens_to_ids("</m>")

def get_global_attention(input_ids):
    global_attention_mask = torch.zeros(input_ids.shape)
    global_attention_mask[:, 0] = 1  # global attention to the CLS token
    start = torch.nonzero(input_ids == start_token)  # global attention to the <m> token
    end = torch.nonzero(input_ids == end_token)  # global attention to the </m> token
    globs = torch.cat((start, end))
    value = torch.ones(globs.shape[0])
    global_attention_mask.index_put_(tuple(globs.t()), value)
    return global_attention_mask

m1 = "In this paper we present the results of an experiment in <m> automatic concept and definition extraction </m> from written sources of law using relatively simple natural methods."
m2 = "This task is important since many natural language processing (NLP) problems, such as <m> information extraction </m>, summarization and dialogue."

inputs = m1 + " </s></s> " + m2

tokens = tokenizer(inputs, return_tensors='pt')
global_attention_mask = get_global_attention(tokens['input_ids'])

with torch.no_grad():
    output = model(tokens['input_ids'], tokens['attention_mask'], global_attention_mask)

scores = torch.softmax(output.logits, dim=-1)
# tensor([[0.0818, 0.0023, 0.0019, 0.9139]]) -- m1 is a child of m2
```

**Note:** There is a slight difference between this model and the original model presented in the [paper](https://openreview.net/forum?id=OFLbgUP04nC). The original model includes a single linear layer on top of the `<s>` token (equivalent to `[CLS]`) while this model includes a two-layer MLP to be in line with `LongformerForSequenceClassification`. The original repository can be found [here](https://github.com/ariecattan/scico).

# Citation

```bibtex
@inproceedings{
    cattan2021scico,
    title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts},
    author={Arie Cattan and Sophie Johnson and Daniel S Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope},
    booktitle={3rd Conference on Automated Knowledge Base Construction},
    year={2021},
    url={https://openreview.net/forum?id=OFLbgUP04nC}
}
```
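As a small follow-up to the snippet above (it assumes the previous block has already run and `scores` is in scope), the softmax scores can be mapped back to the four relations in the index order the card defines; the `relations` list below is our addition, not part of the original example:

```python
# Index order as defined in the card: 0 not related, 1 coref, 2 parent, 3 child.
relations = ["not related", "coref", "parent", "child"]
predicted = relations[int(scores.argmax(dim=-1))]
print(predicted)  # -> "child" for the m1/m2 pair above
```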
675
alperiox/autonlp-user-review-classification-536415182
[ "CONTENT", "INTERFACE", "SUBSCRIPTION", "USER_EXPERIENCE" ]
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- alperiox/autonlp-data-user-review-classification
co2_eq_emissions: 1.268309634217171
---

# Model Trained Using AutoNLP

- Problem type: Multi-class Classification
- Model ID: 536415182
- CO2 Emissions (in grams): 1.268309634217171

## Validation Metrics

- Loss: 0.44733062386512756
- Accuracy: 0.8873239436619719
- Macro F1: 0.8859416445623343
- Micro F1: 0.8873239436619719
- Weighted F1: 0.8864646766540891
- Macro Precision: 0.8848522167487685
- Micro Precision: 0.8873239436619719
- Weighted Precision: 0.8883299798792756
- Macro Recall: 0.8908045977011494
- Micro Recall: 0.8873239436619719
- Weighted Recall: 0.8873239436619719

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alperiox/autonlp-user-review-classification-536415182
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")

outputs = model(**inputs)
```
676
alvp/autonlp-alberti-stanza-names-34318169
[ "cantar", "chamberga", "copla_arte_mayor", "copla_arte_menor", "copla_castellana", "copla_mixta", "copla_real", "couplet", "cuaderna_vía", "cuarteta", "cuarteto", "cuarteto_lira", "décima_antigua", "endecha_real", "espinela", "estrofa_francisco_de_la_torre", "estrofa_manriqueña", "estrofa_sáfica", "haiku", "lira", "novena", "octava", "octava_real", "octavilla", "ovillejo", "quinteto", "quintilla", "redondilla", "romance", "romance_arte_mayor", "seguidilla", "seguidilla_compuesta", "seguidilla_gitana", "septeto", "septilla", "serventesio", "sexta_rima", "sexteto", "sexteto_lira", "sextilla", "silva_arromanzada", "soleá", "tercetillo", "terceto", "terceto_monorrimo", "unknown" ]
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- alvp/autonlp-data-alberti-stanza-names
co2_eq_emissions: 8.612473981829835
---

# Model Trained Using AutoNLP

- Problem type: Multi-class Classification
- Model ID: 34318169
- CO2 Emissions (in grams): 8.612473981829835

## Validation Metrics

- Loss: 1.3520570993423462
- Accuracy: 0.6083916083916084
- Macro F1: 0.5420169617715481
- Micro F1: 0.6083916083916084
- Weighted F1: 0.5963328136975058
- Macro Precision: 0.5864033493660455
- Micro Precision: 0.6083916083916084
- Weighted Precision: 0.6364793882921277
- Macro Recall: 0.5545405576555766
- Micro Recall: 0.6083916083916084
- Weighted Recall: 0.6083916083916084

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alvp/autonlp-alberti-stanza-names-34318169
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("alvp/autonlp-alberti-stanza-names-34318169", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("alvp/autonlp-alberti-stanza-names-34318169", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")

outputs = model(**inputs)
```
677
am4nsolanki/autonlp-text-hateful-memes-36789092
[ "0", "1" ]
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- am4nsolanki/autonlp-data-text-hateful-memes
co2_eq_emissions: 1.4280361775467445
---

# Model Trained Using AutoNLP

- Problem type: Binary Classification
- Model ID: 36789092
- CO2 Emissions (in grams): 1.4280361775467445

## Validation Metrics

- Loss: 0.5255328416824341
- Accuracy: 0.7666078777189889
- Precision: 0.6913123844731978
- Recall: 0.6192052980132451
- AUC: 0.7893359070795125
- F1: 0.6532751091703057

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/am4nsolanki/autonlp-text-hateful-memes-36789092
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("am4nsolanki/autonlp-text-hateful-memes-36789092", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("am4nsolanki/autonlp-text-hateful-memes-36789092", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")

outputs = model(**inputs)
```
678
amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
[ "negative", "neutral", "positive" ]
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- amansolanki/autonlp-data-Tweet-Sentiment-Extraction
co2_eq_emissions: 3.651199395353127
---

# Model Trained Using AutoNLP

- Problem type: Multi-class Classification
- Model ID: 20114061
- CO2 Emissions (in grams): 3.651199395353127

## Validation Metrics

- Loss: 0.5046541690826416
- Accuracy: 0.8036219581211093
- Macro F1: 0.807095210403678
- Micro F1: 0.8036219581211093
- Weighted F1: 0.8039634739225368
- Macro Precision: 0.8076842795233988
- Micro Precision: 0.8036219581211093
- Weighted Precision: 0.8052135235094771
- Macro Recall: 0.8075241470527056
- Micro Recall: 0.8036219581211093
- Weighted Recall: 0.8036219581211093

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")

outputs = model(**inputs)
```
679
amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2
[ "☀", "☹️", "✨", "❤", "🇺🇸", "🎄", "💕", "💙", "💜", "💢", "💯", "📷", "📸", "🔥", "😁", "😂", "😉", "😊", "😍", "😎", "😔", "😘", "😜", "😠", "😡", "😤", "😩", "😭", "😳", "🙃", "🙄", "🙈" ]
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-en-ru-emoji-v2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-en-ru-emoji-v2

This model is a fine-tuned version of [DeepPavlov/xlm-roberta-large-en-ru](https://huggingface.co/DeepPavlov/xlm-roberta-large-en-ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3356
- Accuracy: 0.3102

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.4   | 200  | 3.0592          | 0.1204   |
| No log        | 0.81  | 400  | 2.5356          | 0.2480   |
| 2.6294        | 1.21  | 600  | 2.4570          | 0.2569   |
| 2.6294        | 1.62  | 800  | 2.3332          | 0.2832   |
| 1.9286        | 2.02  | 1000 | 2.3354          | 0.2803   |
| 1.9286        | 2.42  | 1200 | 2.3610          | 0.2881   |
| 1.9286        | 2.83  | 1400 | 2.3004          | 0.2973   |
| 1.7312        | 3.23  | 1600 | 2.3619          | 0.3026   |
| 1.7312        | 3.64  | 1800 | 2.3596          | 0.3032   |
| 1.5816        | 4.04  | 2000 | 2.2972          | 0.3072   |
| 1.5816        | 4.44  | 2200 | 2.3077          | 0.3073   |
| 1.5816        | 4.85  | 2400 | 2.3356          | 0.3102   |

### Framework versions

- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
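## Example usage

A minimal sketch with the `transformers` pipeline (the example sentences are illustrative; the predicted label is one of the emoji stored in the model's config):

```python
from transformers import pipeline

# Text classifier whose classes are emoji.
classifier = pipeline(
    "text-classification",
    model="amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2",
)

# The base model is bilingual, so both English and Russian inputs should work.
print(classifier("what a beautiful sunny day"))
print(classifier("какой чудесный солнечный день"))
```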
681
amirhossein1376/pft-clf-finetuned
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
---
license: apache-2.0
language: fa
widget:
- text: "امروز دربی دو تیم پرسپولیس و استقلال در ورزشگاه آزادی تهران برگزار می‌شود."
- text: "وزیر امور خارجه اردن تاکید کرد که همه کشورهای عربی خواهان روابط خوب با ایران هستند. به گزارش ایسنا به نقل از شبکه فرانس ۲۴، ایمن الصفدی معاون نخست‌وزیر و وزیر امور خارجه اردن پس از کنفرانس لیبی در پاریس در گفت‌وگویی با فرانس ۲۴ تاکید کرد: موضع اردن روشن است، ما خواستار روابط منطقه‌ای مبتنی بر حسن همجواری و عدم مداخله در امور داخلی هستیم. بسیاری از مسائل و مشکلات منطقه نیاز به رسیدگی از طریق گفت‌وگو دارد. الصفدی هرگونه گفت‌وگوی با واسطه اردن با ایران را رد کرده و گفت: ما با نمایندگان هیچ‌کس صحبت نمی‌کنیم و زمانی که با ایران صحبت می‌کنیم مستقیماً با دولت این کشور بوده و از طریق تماس تلفنی وزیر امور خارجه دو کشور. وی تاکید کرد: همه در منطقه عربی خواستار روابط خوب با ایران هستند، اما برای تحقق این امر باید روابط بر اساس شفافیت و بر اساس اصول احترام به همسایگی و عدم مداخله در امور داخلی باشد."
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: pft-clf-finetuned
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pft-clf-finetuned

This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on the FarsNews1398 dataset. This dataset is a collection of news articles gathered from the Fars News website, an Iranian news agency. You can download the dataset from [here](https://www.kaggle.com/amirhossein76/farsnews1398).

The category, abstract, and paragraphs of each article were used for text classification: the "abstract" and "paragraphs" fields were concatenated as the input, and "category" served as the classification target. The notebook used for fine-tuning can be found [here](https://colab.research.google.com/drive/1jC2dfKRASxCY-b6bJSPkhEJfQkOA30O0?usp=sharing).

Loss and the Matthews correlation coefficient are reported on the validation set. The model achieves the following results on the evaluation set:
- Loss: 0.0617
- Matthews Correlation: 0.9830

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.0634        | 1.0   | 20276 | 0.0617          | 0.9830               |

### Framework versions

- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
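## Example usage

A minimal sketch with the `transformers` pipeline, reusing the first widget text above. Note that the config exposes only generic ids (`LABEL_0` … `LABEL_11`); mapping them back to FarsNews1398 categories requires the training notebook linked above:

```python
from transformers import pipeline

# Persian news-topic classifier fine-tuned from HooshvareLab/bert-fa-zwnj-base.
classifier = pipeline("text-classification", model="amirhossein1376/pft-clf-finetuned")

# A sports headline; the pipeline returns a generic id such as 'LABEL_3'.
print(classifier("امروز دربی دو تیم پرسپولیس و استقلال در ورزشگاه آزادی تهران برگزار می‌شود."))
```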
682
andi611/distilbert-base-uncased-ner-agnews
[ "Business", "Sci/Tech", "Sports", "World" ]
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
metrics:
- accuracy
model_index:
- name: distilbert-base-uncased-agnews
  results:
  - dataset:
      name: ag_news
      type: ag_news
      args: default
    metric:
      name: Accuracy
      type: accuracy
      value: 0.9473684210526315
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-agnews

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1652
- Accuracy: 0.9474

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1916        | 1.0   | 3375 | 0.1741          | 0.9412   |
| 0.123         | 2.0   | 6750 | 0.1631          | 0.9483   |

### Framework versions

- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
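## Example usage

A minimal sketch with the `transformers` pipeline (the headline is illustrative; the four AG News topics come from the model's config):

```python
from transformers import pipeline

# Topic classifier over the AG News classes: World, Sports, Business, Sci/Tech.
classifier = pipeline("text-classification", model="andi611/distilbert-base-uncased-ner-agnews")

print(classifier("Wall Street stocks rallied after strong quarterly earnings reports."))
```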
683
andi611/distilbert-base-uncased-qa-boolq
[ "False", "True" ]
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- boolq
metrics:
- accuracy
model_index:
- name: distilbert-base-uncased-boolq
  results:
  - task:
      name: Question Answering
      type: question-answering
    dataset:
      name: boolq
      type: boolq
      args: default
    metric:
      name: Accuracy
      type: accuracy
      value: 0.7314984709480122
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-boolq

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the boolq dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2071
- Accuracy: 0.7315

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6506        | 1.0   | 531  | 0.6075          | 0.6681   |
| 0.575         | 2.0   | 1062 | 0.5816          | 0.6978   |
| 0.4397        | 3.0   | 1593 | 0.6137          | 0.7253   |
| 0.2524        | 4.0   | 2124 | 0.8124          | 0.7466   |
| 0.126         | 5.0   | 2655 | 1.1437          | 0.7370   |

### Framework versions

- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
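## Example usage

A sketch of pairwise inference, assuming the model scores (question, passage) pairs the way boolq presents them; the question and passage below are illustrative:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "andi611/distilbert-base-uncased-qa-boolq"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

question = "is the sky blue on a clear day"
passage = "On a clear day the sky appears blue because air molecules scatter blue light more strongly than red."

# Encode the question and passage as a sentence pair and take the argmax class.
inputs = tokenizer(question, passage, truncation=True, return_tensors="pt")
with torch.no_grad():
    predicted_id = model(**inputs).logits.argmax(dim=-1).item()

# The config maps the two ids to 'False' / 'True'.
print(model.config.id2label[predicted_id])
```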
684
anditya/xlm-roberta-base-finetuned-marc-en
[ "good", "great", "ok", "poor", "terrible" ]
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-marc-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8885
- Mae: 0.4390

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1089        | 1.0   | 235  | 0.9027          | 0.4756 |
| 0.9674        | 2.0   | 470  | 0.8885          | 0.4390 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
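## Example usage

A minimal sketch with the `transformers` pipeline (the review texts are illustrative; the labels are the five rating buckets from the model's config: terrible, poor, ok, good, great):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="anditya/xlm-roberta-base-finetuned-marc-en")

print(classifier("The battery died after two days and support never answered."))
print(classifier("Exactly as described, arrived early, works perfectly."))
```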
687
anel/autonlp-cml-412010597
[ "misleading", "news" ]
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- anel/autonlp-data-cml
co2_eq_emissions: 10.411685187181709
---

# Model Trained Using AutoNLP

- Problem type: Binary Classification
- Model ID: 412010597
- CO2 Emissions (in grams): 10.411685187181709

## Validation Metrics

- Loss: 0.12585781514644623
- Accuracy: 0.9475446428571429
- Precision: 0.9454660748256183
- Recall: 0.964424320827943
- AUC: 0.990229573862156
- F1: 0.9548511047070125

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anel/autonlp-cml-412010597
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("anel/autonlp-cml-412010597", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("anel/autonlp-cml-412010597", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")

outputs = model(**inputs)
```
688
anelnurkayeva/autonlp-covid-432211280
[ "misleading", "news" ]
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- anelnurkayeva/autonlp-data-covid
co2_eq_emissions: 8.898145050355591
---

# Model Trained Using AutoNLP

- Problem type: Binary Classification
- Model ID: 432211280
- CO2 Emissions (in grams): 8.898145050355591

## Validation Metrics

- Loss: 0.12489336729049683
- Accuracy: 0.9520089285714286
- Precision: 0.9436443331246086
- Recall: 0.9747736093143596
- AUC: 0.9910066767410616
- F1: 0.958956411072224

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anelnurkayeva/autonlp-covid-432211280
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")

outputs = model(**inputs)
```
689
anindabitm/sagemaker-distilbert-emotion
[ "anger", "fear", "joy", "love", "sadness", "surprise" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9165
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sagemaker-distilbert-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2434
- Accuracy: 0.9165

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9423        | 1.0   | 500  | 0.2434          | 0.9165   |

### Framework versions

- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
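## Example usage

A minimal sketch with the `transformers` pipeline (the input is illustrative; the six emotion classes come from the model's config):

```python
from transformers import pipeline

# Emotion classifier: sadness, joy, love, anger, fear, surprise.
classifier = pipeline("text-classification", model="anindabitm/sagemaker-distilbert-emotion")

print(classifier("I can't believe you remembered my birthday!"))
```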
714
ans/vaccinating-covid-tweets
[ "false", "misleading", "true" ]
---
language: en
license: apache-2.0
datasets:
- tweets
widget:
- text: "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
---

# Disclaimer: This page is under maintenance. Please DO NOT refer to the information on this page to make any decision yet.

# Vaccinating COVID tweets

A fine-tuned model for the fact-classification task on English tweets about COVID-19/vaccines.

## Intended uses & limitations

You can classify whether an input tweet (or any other statement) about COVID-19/vaccines is `true`, `false` or `misleading`. Note that since this model was trained with data up to May 2020, the most recent information may not be reflected.

#### How to use

You can use this model directly on this page or via `transformers` in Python.

- Load the pipeline and run it on an input sequence:

```python
from transformers import pipeline

pipe = pipeline("sentiment-analysis", model="ans/vaccinating-covid-tweets")
seq = "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
pipe(seq)
```

- Expected output:

```python
[
  {
    "label": "false",
    "score": 0.07972867041826248
  },
  {
    "label": "misleading",
    "score": 0.019911376759409904
  },
  {
    "label": "true",
    "score": 0.9003599882125854
  }
]
```

- `true` examples:

```python
"By the end of 2020, several vaccines had become available for use in different parts of the world."
"Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
"RNA vaccines were the first vaccines for SARS-CoV-2 to be produced and represent an entirely new vaccine approach."
```

- `false` examples:

```python
"COVID-19 vaccine caused new strain in UK."
```

#### Limitations and bias

To conservatively classify whether an input sequence is true or not, the model may have predictions biased toward `false` or `misleading`.

## Training data & procedure

#### Pre-trained baseline model

- Pre-trained model: [BERTweet](https://github.com/VinAIResearch/BERTweet)
  - trained based on the RoBERTa pre-training procedure
  - 850M general English tweets (Jan 2012 to Aug 2019)
  - 23M COVID-19 English tweets
- Size of the model: >134M parameters
- Further training
  - Pre-training with recent COVID-19/vaccine tweets and fine-tuning for fact classification

#### 1) Pre-training the language model

- The model was pre-trained on COVID-19/vaccine-related tweets using a masked language modeling (MLM) objective, starting from BERTweet.
- The following datasets of English tweets were used:
  - Tweets with the trending #CovidVaccine hashtag, 207,000 tweets uploaded across Aug 2020 to Apr 2021 ([kaggle](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets))
  - Tweets about all COVID-19 vaccines, 78,000 tweets uploaded across Dec 2020 to May 2021 ([kaggle](https://www.kaggle.com/gpreda/all-covid19-vaccines-tweets))
  - COVID-19 Twitter chatter dataset, 590,000 tweets uploaded across Mar 2021 to May 2021 ([github](https://github.com/thepanacealab/covid19_twitter))

#### 2) Fine-tuning for fact classification

- A model fine-tuned from the pre-trained language model (1) for the fact-classification task on COVID-19/vaccines.
- COVID-19/vaccine-related statements were collected from [Poynter](https://www.poynter.org/ifcn-covid-19-misinformation/) and [Snopes](https://www.snopes.com/) using Selenium, resulting in over 14,000 fact-checked statements from Jan 2020 to May 2021.
- Original labels were divided into the following three categories:
  - `False`: includes false, no evidence, manipulated, fake, not true, unproven and unverified
  - `Misleading`: includes misleading, exaggerated, out of context and needs context
  - `True`: includes true and correct

## Evaluation results

| Training loss | Validation loss | Training accuracy | Validation accuracy |
| --- | --- | --- | --- |
| 0.1062 | 0.1006 | 96.3% | 94.5% |

# Contributors

- This model is part of a final team project from the MLDL for DS class at SNU.
- Team BIBI - Vaccinating COVID-NineTweets
- Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom
- Advisor: Prof. Wen-Syan Li

<a href="https://gsds.snu.ac.kr/"><img src="https://gsds.snu.ac.kr/wp-content/uploads/sites/50/2021/04/GSDS_logo2-e1619068952717.png" width="200" height="80"></a>
716
citizenlab/distilbert-base-multilingual-cased-toxicity
[ "not_toxic", "toxic" ]
---
pipeline_type: "text-classification"
widget:
- text: "this is a lovely message"
  example_title: "Example 1"
  multi_class: false
- text: "you are an idiot and you and your family should go back to your country"
  example_title: "Example 2"
  multi_class: false
language:
- en
- nl
- fr
- pt
- it
- es
- de
- da
- pl
- af
datasets:
- jigsaw_toxicity_pred
metrics:
- F1 Accuracy
---

# citizenlab/distilbert-base-multilingual-cased-toxicity

This is a multilingual DistilBERT sequence classifier trained on the [JIGSAW Toxic Comment Classification Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) dataset.

## How to use it

```python
from transformers import pipeline

model_path = "citizenlab/distilbert-base-multilingual-cased-toxicity"

toxicity_classifier = pipeline("text-classification", model=model_path, tokenizer=model_path)

toxicity_classifier("this is a lovely message")
> [{'label': 'not_toxic', 'score': 0.9954179525375366}]

toxicity_classifier("you are an idiot and you and your family should go back to your country")
> [{'label': 'toxic', 'score': 0.9948776960372925}]
```

## Evaluation

### Accuracy

```
Accuracy Score = 0.9425
F1 Score (Micro) = 0.9450549450549449
F1 Score (Macro) = 0.8491432341169309
```
717
arianpasquali/distilbert-base-uncased-finetuned-clinc
[ "accept_reservations", "account_blocked", "alarm", "application_status", "apr", "are_you_a_bot", "balance", "bill_balance", "bill_due", "book_flight", "book_hotel", "calculator", "calendar", "calendar_update", "calories", "cancel", "cancel_reservation", "car_rental", "card_declined", "carry_on", "change_accent", "change_ai_name", "change_language", "change_speed", "change_user_name", "change_volume", "confirm_reservation", "cook_time", "credit_limit", "credit_limit_change", "credit_score", "current_location", "damaged_card", "date", "definition", "direct_deposit", "directions", "distance", "do_you_have_pets", "exchange_rate", "expiration_date", "find_phone", "flight_status", "flip_coin", "food_last", "freeze_account", "fun_fact", "gas", "gas_type", "goodbye", "greeting", "how_busy", "how_old_are_you", "improve_credit_score", "income", "ingredient_substitution", "ingredients_list", "insurance", "insurance_change", "interest_rate", "international_fees", "international_visa", "jump_start", "last_maintenance", "lost_luggage", "make_call", "maybe", "meal_suggestion", "meaning_of_life", "measurement_conversion", "meeting_schedule", "min_payment", "mpg", "new_card", "next_holiday", "next_song", "no", "nutrition_info", "oil_change_how", "oil_change_when", "oos", "order", "order_checks", "order_status", "pay_bill", "payday", "pin_change", "play_music", "plug_type", "pto_balance", "pto_request", "pto_request_status", "pto_used", "recipe", "redeem_rewards", "reminder", "reminder_update", "repeat", "replacement_card_duration", "report_fraud", "report_lost_card", "reset_settings", "restaurant_reservation", "restaurant_reviews", "restaurant_suggestion", "rewards_balance", "roll_dice", "rollover_401k", "routing", "schedule_maintenance", "schedule_meeting", "share_location", "shopping_list", "shopping_list_update", "smart_home", "spelling", "spending_history", "sync_device", "taxes", "tell_joke", "text", "thank_you", "time", "timer", "timezone", "tire_change", "tire_pressure", "todo_list", "todo_list_update", "traffic", "transactions", "transfer", "translate", "travel_alert", "travel_notification", "travel_suggestion", "uber", "update_playlist", "user_name", "vaccines", "w2", "weather", "what_are_your_hobbies", "what_can_i_ask_you", "what_is_your_name", "what_song", "where_are_you_from", "whisper_mode", "who_do_you_work_for", "who_made_you", "yes" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9112903225806451
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7751
- Accuracy: 0.9113

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.315         | 1.0   | 318  | 3.3087          | 0.74     |
| 2.6371        | 2.0   | 636  | 1.8833          | 0.8381   |
| 1.5388        | 3.0   | 954  | 1.1547          | 0.8929   |
| 1.0076        | 4.0   | 1272 | 0.8590          | 0.9071   |
| 0.79          | 5.0   | 1590 | 0.7751          | 0.9113   |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
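## Example usage

A minimal sketch with the `transformers` pipeline (the queries are illustrative; the intents, including the out-of-scope class `oos`, come from the model's config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arianpasquali/distilbert-base-uncased-finetuned-clinc",
)

print(classifier("how do i change the pin on my debit card"))       # expected intent: pin_change
print(classifier("please add milk and eggs to my shopping list"))  # expected intent: shopping_list_update
```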
718
citizenlab/twitter-xlm-roberta-base-sentiment-finetunned
[ "Negative", "Neutral", "Positive" ]
---
pipeline_type: "text-classification"
widget:
- text: "this is a lovely message"
  example_title: "Example 1"
  multi_class: false
- text: "you are an idiot and you and your family should go back to your country"
  example_title: "Example 2"
  multi_class: false
language:
- en
- nl
- fr
- pt
- it
- es
- de
- da
- pl
- af
datasets:
- jigsaw_toxicity_pred
metrics:
- F1 Accuracy
---

# citizenlab/twitter-xlm-roberta-base-sentiment-finetunned

This is a multilingual XLM-RoBERTa sequence classifier fine-tuned from the [Cardiff NLP Group](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) sentiment classification model.

## How to use it

```python
from transformers import pipeline

model_path = "citizenlab/twitter-xlm-roberta-base-sentiment-finetunned"

sentiment_classifier = pipeline("text-classification", model=model_path, tokenizer=model_path)

sentiment_classifier("this is a lovely message")
> [{'label': 'Positive', 'score': 0.9918450713157654}]

sentiment_classifier("you are an idiot and you and your family should go back to your country")
> [{'label': 'Negative', 'score': 0.9849833846092224}]
```

## Evaluation

```
              precision    recall  f1-score   support

    Negative       0.57      0.14      0.23        28
     Neutral       0.78      0.94      0.86       132
    Positive       0.89      0.80      0.85        51

    accuracy                           0.80       211
   macro avg       0.75      0.63      0.64       211
weighted avg       0.78      0.80      0.77       211
```
719
aristotletan/roberta-base-finetuned-sst2
[ "analogous event", "appointment of receiver", "assets", "breach of obligations", "cessation of business", "composition and arrangement", "creditor control", "cross default", "disposal", "event or events", "insolvency", "invalidity", "jeopardy", "judgement", "legal proceedings", "misrepresentation", "nationalisation", "non payment", "others", "repudiation", "revocation of license", "winding up" ]
---
license: mit
tags:
- generated_from_trainer
datasets:
- scim
metrics:
- accuracy
model_index:
- name: roberta-base-finetuned-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: scim
      type: scim
      args: eod
    metric:
      name: Accuracy
      type: accuracy
      value: 0.9111111111111111
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-finetuned-sst2

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the scim dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4632
- Accuracy: 0.9111

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 90   | 2.0273          | 0.6667   |
| No log        | 2.0   | 180  | 0.8802          | 0.8556   |
| No log        | 3.0   | 270  | 0.5908          | 0.8889   |
| No log        | 4.0   | 360  | 0.4632          | 0.9111   |
| No log        | 5.0   | 450  | 0.4294          | 0.9111   |

### Framework versions

- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
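## Example usage

A sketch using the raw model rather than the pipeline, so the probabilities over the 22 clause categories in the model's config are visible (the example clause is illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aristotletan/roberta-base-finetuned-sst2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "The borrower fails to pay any amount due under this agreement on the due date."
inputs = tokenizer(text, truncation=True, return_tensors="pt")

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

best = probs.argmax().item()
print(model.config.id2label[best], round(probs[best].item(), 4))  # e.g. 'non payment'
```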
720
arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2
[ "negative", "neutral", "positive" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-sentiment-2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: amazon_reviews_multi
      type: amazon_reviews_multi
      args: en
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7614
    - name: F1
      type: f1
      value: 0.7614
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-multilingual-cased-sentiment-2

This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5882
- Accuracy: 0.7614
- F1: 0.7614

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
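## Example usage

A minimal sketch with the `transformers` pipeline (the reviews are illustrative; the base model is multilingual, so non-English input should also work):

```python
from transformers import pipeline

# Review sentiment: negative / neutral / positive.
classifier = pipeline(
    "text-classification",
    model="arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2",
)

print(classifier("The product arrived broken and the seller never replied."))
print(classifier("Produkt kam schnell an und funktioniert einwandfrei."))
```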
721
arpanghoshal/EmoRoBERTa
[ "admiration", "amusement", "anger", "annoyance", "approval", "caring", "confusion", "curiosity", "desire", "disappointment", "disapproval", "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief", "joy", "love", "nervousness", "neutral", "optimism", "pride", "realization", "relief", "remorse", "sadness", "surprise" ]
---
language: en
tags:
- text-classification
- tensorflow
- roberta
datasets:
- go_emotions
license: mit
---

Connect with me on LinkedIn - [linkedin.com/in/arpanghoshal](https://www.linkedin.com/in/arpanghoshal)

## What is GoEmotions

A dataset of 58,000 Reddit comments labelled with 28 emotions:
- admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral

## What is RoBERTa

RoBERTa builds on BERT's language masking strategy and modifies key hyperparameters in BERT, including removing BERT's next-sentence pretraining objective, and training with much larger mini-batches and learning rates. RoBERTa was also trained on an order of magnitude more data than BERT, for a longer amount of time. This allows RoBERTa representations to generalize even better to downstream tasks compared to BERT.

## Hyperparameters

| Parameter         |       |
| ----------------- | :---: |
| Learning rate     | 5e-5  |
| Epochs            | 10    |
| Max Seq Length    | 50    |
| Batch size        | 16    |
| Warmup Proportion | 0.1   |
| Epsilon           | 1e-8  |

## Results

Best result of `Macro F1` - 49.30%

## Usage

```python
from transformers import RobertaTokenizerFast, TFRobertaForSequenceClassification, pipeline

tokenizer = RobertaTokenizerFast.from_pretrained("arpanghoshal/EmoRoBERTa")
model = TFRobertaForSequenceClassification.from_pretrained("arpanghoshal/EmoRoBERTa")

emotion = pipeline('sentiment-analysis', model='arpanghoshal/EmoRoBERTa')

emotion_labels = emotion("Thanks for using it.")
print(emotion_labels)
```

Output:

```
[{'label': 'gratitude', 'score': 0.9964383244514465}]
```
722
asalics/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.924
    - name: F1
      type: f1
      value: 0.9244145121183605
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.924
- F1: 0.9244

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7914        | 1.0   | 250  | 0.3032          | 0.905    | 0.9030 |
| 0.2379        | 2.0   | 500  | 0.2207          | 0.924    | 0.9244 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
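## Example usage

A minimal sketch with the `transformers` pipeline. The config only exposes generic ids (`LABEL_0` … `LABEL_5`); the mapping below assumes the standard class order of the `emotion` dataset (0 sadness, 1 joy, 2 love, 3 anger, 4 fear, 5 surprise):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="asalics/distilbert-base-uncased-finetuned-emotion",
)

# Assumed mapping from generic ids to the emotion dataset's class names.
id2emotion = {f"LABEL_{i}": name for i, name in
              enumerate(["sadness", "joy", "love", "anger", "fear", "surprise"])}

pred = classifier("I am so relieved the exam is finally over")[0]
print(id2emotion[pred["label"]], pred["score"])
```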
723
ashish-chouhan/xlm-roberta-base-finetuned-marc
[ "good", "great", "ok", "poor", "terrible" ]
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-marc

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0171
- Mae: 0.5310

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1404        | 1.0   | 308  | 1.0720          | 0.5398 |
| 0.9805        | 2.0   | 616  | 1.0171          | 0.5310 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
724
ashraq/dv-electra-small-news-classification
[ "ރާއްޖެ", "ކުޅިވަރު", "ވިޔަފާރި", "މުނިފޫހިފިލުވުން", "ދީނީ", "ދުނިޔެ", "ސިޔާސީ", "ޓެކްނޮލޮޖީ" ]
---
widget:
- text: 'ގޫގަލް ޕިކްސަލް 6 ގެ ކެމެރާ، އޭއައި ގެ ޖާދޫއިން ފުރިފައި'
---

# Dhivehi news classification

The [ELECTRA-small](https://huggingface.co/ashraq/dv-electra-small) model fine-tuned for news classification in Dhivehi.
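## Example usage

A minimal sketch with the `transformers` pipeline, reusing the widget text above (a headline about the Google Pixel 6 camera; the expected topic would be ޓެކްނޮލޮޖީ, i.e. technology):

```python
from transformers import pipeline

# Dhivehi news-topic classifier.
classifier = pipeline(
    "text-classification",
    model="ashraq/dv-electra-small-news-classification",
)

print(classifier("ގޫގަލް ޕިކްސަލް 6 ގެ ކެމެރާ، އޭއައި ގެ ޖާދޫއިން ފުރިފައި"))
```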
728
astarostap/autonlp-antisemitism-2-21194454
[ "0", "1" ]
---
tags: autonlp
language: en
widget:
- text: "the jews have a lot of power"
datasets:
- astarostap/autonlp-data-antisemitism-2
co2_eq_emissions: 2.0686690092905224
---

# Description

This model takes a tweet with the word "jew" in it, and determines if it's antisemitic.

Training data: This model was trained on 4k tweets, where ~50% were labeled as antisemitic. I labeled them myself based on personal experience and knowledge about common antisemitic tropes.

Note: The goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts. Please keep in mind that I'm not an expert on antisemitism or hate speech. Whether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech.

If you would like to collaborate on antisemitism detection, please feel free to contact me at starosta@alumni.stanford.edu

This model is not ready for production. It needs more evaluation and more training data.

# Model Trained Using AutoNLP

- Problem type: Binary Classification
- Model ID: 21194454
- CO2 Emissions (in grams): 2.0686690092905224
- Dataset: https://huggingface.co/datasets/astarostap/autonlp-data-antisemitism-2

## Validation Metrics

- Loss: 0.5291365385055542
- Accuracy: 0.7572692793931732
- Precision: 0.7126948775055679
- Recall: 0.835509138381201
- AUC: 0.8185826549941126
- F1: 0.7692307692307693

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/astarostap/autonlp-antisemitism-2-21194454
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("astarostap/autonlp-antisemitism-2-21194454", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("astarostap/autonlp-antisemitism-2-21194454", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")

outputs = model(**inputs)
```
731
aubmindlab/aragpt2-mega-detector-long
[ "human-written", "machine-generated" ]
---
language: ar
widget:
- text: "وإذا كان هناك من لا يزال يعتقد أن لبنان هو سويسرا الشرق ، فهو مخطئ إلى حد بعيد . فلبنان ليس سويسرا ، ولا يمكن أن يكون كذلك . لقد عاش اللبنانيون في هذا البلد منذ ما يزيد عن ألف وخمسمئة عام ، أي منذ تأسيس الإمارة الشهابية التي أسسها الأمير فخر الدين المعني الثاني ( 1697 - 1742 )"
---

# AraGPT2 Detector

A machine-generated text detector model from the [AraGPT2: Pre-Trained Transformer for Arabic Language Generation paper](https://arxiv.org/abs/2012.15520).

This model is trained on long text passages and achieves a 99.4% F1-Score.

# How to use it:

```python
from transformers import pipeline
from arabert.preprocess import ArabertPreprocessor

processor = ArabertPreprocessor(model="aubmindlab/araelectra-base-discriminator")
pipe = pipeline("sentiment-analysis", model="aubmindlab/aragpt2-mega-detector-long")

text = " "
text_prep = processor.preprocess(text)
result = pipe(text_prep)
# [{'label': 'machine-generated', 'score': 0.9977743625640869}]
```

# If you used this model please cite us as:

```
@misc{antoun2020aragpt2,
      title={AraGPT2: Pre-Trained Transformer for Arabic Language Generation},
      author={Wissam Antoun and Fady Baly and Hazem Hajj},
      year={2020},
      eprint={2012.15520},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# Contacts

**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com>
732
avichr/heBERT_sentiment_analysis
[ "neutral", "negative", "positive" ]
## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition

HeBERT is a Hebrew pre-trained language model. It is based on Google's BERT architecture with the BERT-Base config [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br>

HeBERT was trained on three datasets:
1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 million words and 3.8 million sentences.
3. Emotion UGC data collected for the purpose of this study (described below).

We evaluated the model on emotion recognition and sentiment analysis as downstream tasks.

### Emotion UGC Data Description

Our User-Generated Content (UGC) consists of comments written on articles collected from 3 major news sites between January 2020 and August 2020, with a total data size of ~150 MB, including over 7 million words and 350K sentences.

4000 sentences were annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise, and trust) and overall sentiment/polarity. <br>

In order to validate the annotation, we searched for agreement between raters on the emotion in each sentence using Krippendorff's alpha [(Krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105). We retained sentences with alpha > 0.7. Note that while we found general agreement between raters about emotions like happiness, trust, and disgust, there are a few emotions with general disagreement, apparently owing to the complexity of finding them in the text (e.g. expectation and surprise).

### Performance

#### Sentiment analysis

|              | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral      | 0.83      | 0.56   | 0.67     |
| positive     | 0.96      | 0.92   | 0.94     |
| negative     | 0.97      | 0.99   | 0.98     |
| accuracy     |           |        | 0.97     |
| macro avg    | 0.92      | 0.82   | 0.86     |
| weighted avg | 0.96      | 0.97   | 0.96     |

## How to use

### For the masked-LM model (can be fine-tuned to any downstream task)

```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT")
model = AutoModel.from_pretrained("avichr/heBERT")

from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model="avichr/heBERT",
    tokenizer="avichr/heBERT"
)
fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.")
```

### For the sentiment classification model (polarity ONLY):

```
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

>>> sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
[[{'label': 'neutral', 'score': 0.9978172183036804},
  {'label': 'positive', 'score': 0.0014792329166084528},
  {'label': 'negative', 'score': 0.0007035882445052266}]]

>>> sentiment_analysis('קפה זה טעים')
[[{'label': 'neutral', 'score': 0.00047328314394690096},
  {'label': 'positive', 'score': 0.9994067549705505},
  {'label': 'negative', 'score': 0.00011996887042187154}]]

>>> sentiment_analysis('אני לא אוהב את העולם')
[[{'label': 'neutral', 'score': 9.214012970915064e-05},
  {'label': 'positive', 'score': 8.876807987689972e-05},
  {'label': 'negative', 'score': 0.9998190999031067}]]
```

Our model is also available on AWS! For more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)

## Stay tuned!

We are still working on our model and will edit this page as we progress.<br>
Note that we have released only sentiment analysis (polarity) at this point; emotion detection will be released later on.<br>
Our git: https://github.com/avichaychriqui/HeBERT

## If you used this model please cite us as:

Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.

```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={arXiv preprint arXiv:2102.01909},
  year={2021}
}
```
743
ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa
[ "Positive", "Neutral", "Negative" ]
---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
model-index:
- name: bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: indonlu
      type: indonlu
      args: smsa
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9373015873015873
language: id
widget:
- text: "Saya mengapresiasi usaha anda"
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa

This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3390
- Accuracy: 0.9373

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2864        | 1.0   | 688  | 0.2154          | 0.9286   |
| 0.1648        | 2.0   | 1376 | 0.2238          | 0.9357   |
| 0.0759        | 3.0   | 2064 | 0.3351          | 0.9365   |
| 0.044         | 4.0   | 2752 | 0.3390          | 0.9373   |
| 0.0308        | 5.0   | 3440 | 0.4346          | 0.9365   |
| 0.0113        | 6.0   | 4128 | 0.4708          | 0.9365   |
| 0.006         | 7.0   | 4816 | 0.5533          | 0.9325   |
| 0.0047        | 8.0   | 5504 | 0.5888          | 0.9310   |
| 0.0001        | 9.0   | 6192 | 0.5961          | 0.9333   |
| 0.0           | 10.0  | 6880 | 0.5992          | 0.9357   |

### Framework versions

- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
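## Example usage

A minimal sketch with the `transformers` pipeline, reusing the card's widget text (labels: Positive, Neutral, Negative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
)

print(classifier("Saya mengapresiasi usaha anda"))  # expect: Positive
```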
744
ayameRushia/indobert-base-uncased-finetuned-indonlu-smsa
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: indobert-base-uncased-finetuned-indonlu-smsa
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: indonlu
      type: indonlu
      args: smsa
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9301587301587302
    - name: F1
      type: f1
      value: 0.9066105299178986
    - name: Precision
      type: precision
      value: 0.8992078788375845
    - name: Recall
      type: recall
      value: 0.9147307323234121
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# indobert-base-uncased-finetuned-indonlu-smsa

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2277
- Accuracy: 0.9302
- F1: 0.9066
- Precision: 0.8992
- Recall: 0.9147

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 1.0   | 344  | 0.3831          | 0.8476   | 0.7715 | 0.7817    | 0.7627 |
| 0.4167        | 2.0   | 688  | 0.2809          | 0.8905   | 0.8406 | 0.8699    | 0.8185 |
| 0.2624        | 3.0   | 1032 | 0.2254          | 0.9230   | 0.8842 | 0.9004    | 0.8714 |
| 0.2624        | 4.0   | 1376 | 0.2378          | 0.9238   | 0.8797 | 0.9180    | 0.8594 |
| 0.1865        | 5.0   | 1720 | 0.2277          | 0.9302   | 0.9066 | 0.8992    | 0.9147 |
| 0.1217        | 6.0   | 2064 | 0.2444          | 0.9262   | 0.8981 | 0.9013    | 0.8957 |
| 0.1217        | 7.0   | 2408 | 0.2985          | 0.9286   | 0.8999 | 0.9035    | 0.8971 |
| 0.0847        | 8.0   | 2752 | 0.3397          | 0.9278   | 0.8969 | 0.9090    | 0.8871 |
| 0.0551        | 9.0   | 3096 | 0.3542          | 0.9270   | 0.8961 | 0.9010    | 0.8924 |
| 0.0551        | 10.0  | 3440 | 0.3862          | 0.9222   | 0.8895 | 0.8970    | 0.8846 |

### Framework versions

- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
745
ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa
[ "POSITIVE", "NEUTRAL", "NEGATIVE" ]
---
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
model-index:
- name: roberta-base-indonesian-1.5G-sentiment-analysis-smsa
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: indonlu
      type: indonlu
      args: smsa
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9261904761904762
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-indonesian-1.5G-sentiment-analysis-smsa

This model is a fine-tuned version of [cahya/roberta-base-indonesian-1.5G](https://huggingface.co/cahya/roberta-base-indonesian-1.5G) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4294
- Accuracy: 0.9262

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6461        | 1.0   | 688  | 0.2620          | 0.9087   |
| 0.2627        | 2.0   | 1376 | 0.2291          | 0.9151   |
| 0.1784        | 3.0   | 2064 | 0.2891          | 0.9167   |
| 0.1099        | 4.0   | 2752 | 0.3317          | 0.9230   |
| 0.0857        | 5.0   | 3440 | 0.4294          | 0.9262   |
| 0.0346        | 6.0   | 4128 | 0.4759          | 0.9246   |
| 0.0221        | 7.0   | 4816 | 0.4946          | 0.9206   |
| 0.006         | 8.0   | 5504 | 0.5823          | 0.9175   |
| 0.0047        | 9.0   | 6192 | 0.5777          | 0.9159   |
| 0.004         | 10.0  | 6880 | 0.5800          | 0.9175   |

### Framework versions

- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
746
ayameRushia/roberta-base-indonesian-sentiment-analysis-smsa
[ "POSITIVE", "NEUTRAL", "NEGATIVE" ]
---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
model-index:
- name: roberta-base-indonesian-sentiment-analysis-smsa
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: indonlu
      type: indonlu
      args: smsa
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9349206349206349
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-indonesian-sentiment-analysis-smsa

This model is a fine-tuned version of [flax-community/indonesian-roberta-base](https://huggingface.co/flax-community/indonesian-roberta-base) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4252
- Accuracy: 0.9349

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7582        | 1.0   | 688  | 0.3280          | 0.8786   |
| 0.3225        | 2.0   | 1376 | 0.2398          | 0.9206   |
| 0.2057        | 3.0   | 2064 | 0.2574          | 0.9230   |
| 0.1642        | 4.0   | 2752 | 0.2820          | 0.9302   |
| 0.1266        | 5.0   | 3440 | 0.3344          | 0.9317   |
| 0.0608        | 6.0   | 4128 | 0.3543          | 0.9341   |
| 0.058         | 7.0   | 4816 | 0.4252          | 0.9349   |
| 0.0315        | 8.0   | 5504 | 0.4736          | 0.9310   |
| 0.0166        | 9.0   | 6192 | 0.4649          | 0.9349   |
| 0.0143        | 10.0  | 6880 | 0.4648          | 0.9341   |

### Framework versions

- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
747
aychang/bert-base-cased-trec-coarse
[ "ABBR", "DESC", "ENTY", "HUM", "LOC", "NUM" ]
---
language:
- en
license: mit
tags:
- text-classification
datasets:
- trec
model-index:
- name: aychang/bert-base-cased-trec-coarse
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: trec
      type: trec
      config: default
      split: test
    metrics:
    - type: accuracy
      value: 0.974
      name: Accuracy
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTUwZTU1ZGU5YTRiMzNhNmQyMjNlY2M5YjAwN2RlMmYxODI2MjFkY2Q3NWFjZDg3Zjg5ZDk1Y2I1MTUxYjFhMCIsInZlcnNpb24iOjF9.GJkxJOFhsO4UaoHpHH1136Qj_fu9UQ9o3DThtT46hvMduswkgobl9iz6ICYQ7IdYKFbh3zRTlsZzjnAlzGqdBA
    - type: precision
      value: 0.9793164100816639
      name: Precision Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTMxMjI3NWZhOGZkODJmYzkxYzdhZWIwMTBkZTg4YWZiNjcwNTVmM2RjYmQ3ZmNhZjM2MWQzYTUzNzFlMjQzOCIsInZlcnNpb24iOjF9.n45s1_gW040u5f2y-zfVx_5XU-J97dcuWlmaIZsJsCetcHtrjsbHut2gAcPxErl8UPTXSq1XDg5WWug4FPM8CQ
    - type: precision
      value: 0.974
      name: Precision Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY5ZTZiNmYzZDQzYWZiZDdlNDllZWQ4NTVjZWZlYWJkZDgyNGNhZjAzOTZjZDc0NDUwMTE3ODVlMjFjNTIxZCIsInZlcnNpb24iOjF9.4lR7MgvxxTblEV4LZGbko-ylIeFjcjNM5P21iYH6vkNkjItIfiXmKbL55_Zeab4oGJ5ytWz0rIdlpNnmmV29Cw
    - type: precision
      value: 0.9746805065928548
      name: Precision Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDEzYmZmZDIyNDFmNzJmODQ2ODdhYTUyYzQyZjEzZTdhMjg3MTllOGFkNGRlMDFhYzI4ZGE5OTExNjk1ZTI5OSIsInZlcnNpb24iOjF9.Ti5gL3Tk9hCpriIUhB8ltdKRibSilvRZOxAlLCgAkrhg0dXGE5f4n8almCAjbRJEaPW6H6581PhuUfjgMqceBw
    - type: recall
      value: 0.9783617516169679
      name: Recall Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWUwMGUwYmY3MWQwOTcwYjI2Yjc3Yzc1YWQ1YjU2ODY3MzAyMDdkNmM3MmFhZmMxZWFhMTUxNzZlNzViMDA0ZiIsInZlcnNpb24iOjF9.IWhPl9xS5pqEaFHKsBZj6JRtJRpQZQqJhQYW6zmtPi2F3speRsKc0iksfHkmPjm678v-wKUJ4zyGfRs-63HmBg
    - type: recall
      value: 0.974
      name: Recall Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjlhMDY0MmI2NzBiMWY5NTcwYjZlYzE5ODg0ODk1ZTBjZDI4YmZiY2RmZWVlZGUxYzk2MDQ4NjRkMTQ4ZTEzZiIsInZlcnNpb24iOjF9.g5p5b0BqyZxb7Hk9DayRndhs5F0r44h8TXMJDaP6IoFdYzlBfEcZv7UkCu6s6laz9-F-hhZHUZii2ljtYasVAA
    - type: recall
      value: 0.974
      name: Recall Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjJjNTE2ZWFjMGYyZGUzOWI3MDRhM2I2MTRjZGNkOWZkZDJhNzQ4OTYwOTQ2NDY5OGNjZTZhOWU2MzlhNTY5YyIsInZlcnNpb24iOjF9.JnRFkZ-v-yRhCf6di7ONcy_8Tv0rNXQir1TVw-cU9fNY1c4vKRmGaKmLGeR7TxpmKzEQtikb6mFwRwhIAhl8AA
    - type: f1
      value: 0.9783635353409951
      name: F1 Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjM2NDY3MmUyMmEyZjg5MWZhNjllOGRlNWVkYzgyYmM5ZDBmMDdhYmY5NDAxZmYwMjA0YTkzNTI2MjU0NTRlZiIsInZlcnNpb24iOjF9.HlbHjJa-bpYPjujWODpvfLVMtCnNQMDBCYpLGokfBoXibZGKfIzXcgNdXLdJ-DkmMUriX3wVZtGcRvA2ErUeDw
    - type: f1
      value: 0.974
      name: F1 Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjMxNDE4MTBmYzU2MTllMjlhNTcwYWJhMzRkNTE2ZGFiNmQ0ZTEyOWJhMmU2ZDliYTIzNDExYTM5MTAxYjcxNSIsInZlcnNpb24iOjF9.B7G9Gs74MosZPQ16QH2k-zrmlE8KCtIFu3BcrgObYiuqOz1aFURS3IPoOynVFLp1jnJtgQAmQRY_GDumSS-oDg
    - type: f1
      value: 0.97377371266232
      name: F1 Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmEyNjRlYmE5M2U1OWY0OGY2YjQyN2E0NmQxNjY0NTY3N2JiZmMwOWQ1ZTMzZDcwNTdjNWYwNTRiNTljNjMxMiIsInZlcnNpb24iOjF9.VryHh8G_ZvoiSm1SZRMw4kheGWuI3rQ6GUVqm2uf-kkaSU20rYMW20-VKCtwayLcrIHJ92to6YvvW7yI0Le5DA
    - type: loss
      value: 0.13812002539634705
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjk4MDQ5NGRiNTExYmE3NGU1ZmQ1YjUzMTQ4NzUwNWViYzFiODEzMjc2MDA2MzYyOGNjNjYxYzliNDM4Y2U0ZSIsInZlcnNpb24iOjF9.u68ogPOH6-_pb6ZVulzMVfHIfFlLwBeDp8H4iqgfBadjwj2h-aO0jzc4umWFWtzWespsZvnlDjklbhhgrd1vCQ
---

# bert-base-cased trained on TREC 6-class task

## Model description

A simple base BERT model trained on the "trec" dataset.

## Intended uses & limitations

#### How to use

##### Transformers

```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/bert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)

results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```

##### AdaptNLP

```python
from adaptnlp import EasySequenceClassifier

model_name = "aychang/bert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]

classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```

#### Limitations and bias

This is a minimal language model trained on a benchmark dataset.

## Training data

TREC https://huggingface.co/datasets/trec

## Training procedure

Preprocessing, hardware used, hyperparameters...

#### Hardware

One V100

#### Hyperparameters and Training Args

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./models',
    num_train_epochs=2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_steps=500,
    weight_decay=0.01,
    evaluation_strategy="steps",
    logging_dir='./logs',
    save_steps=3000
)
```

## Eval results

```
{'epoch': 2.0,
 'eval_accuracy': 0.974,
 'eval_f1': array([0.98181818, 0.94444444, 1.        , 0.99236641, 0.96995708,
        0.98159509]),
 'eval_loss': 0.138086199760437,
 'eval_precision': array([0.98540146, 0.98837209, 1.        , 0.98484848, 0.94166667,
        0.97560976]),
 'eval_recall': array([0.97826087, 0.90425532, 1.        , 1.        , 1.        ,
        0.98765432]),
 'eval_runtime': 1.6132,
 'eval_samples_per_second': 309.943}
```