Columns: modelId (string, 6-107 chars); label (list); readme (string, 0-56.2k chars); readme_len (int64, 0-56.2k)
dweb/deberta-base-CoLA
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: deberta-base-CoLA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-CoLA This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1655 - Accuracy: 0.8482 - F1: 0.8961 - Roc Auc: 0.8987 - Mcc: 0.6288 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Roc Auc | Mcc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:-------:|:------:| | 0.5266 | 1.0 | 535 | 0.4138 | 0.8159 | 0.8698 | 0.8627 | 0.5576 | | 0.3523 | 2.0 | 1070 | 0.3852 | 0.8387 | 0.8880 | 0.9041 | 0.6070 | | 0.2479 | 3.0 | 1605 | 0.3981 | 0.8482 | 0.8901 | 0.9120 | 0.6447 | | 0.1712 | 4.0 | 2140 | 0.4732 | 0.8558 | 0.9008 | 0.9160 | 0.6486 | | 0.1354 | 5.0 | 2675 | 0.7181 | 0.8463 | 0.8938 | 0.9024 | 0.6250 | | 0.0876 | 6.0 | 3210 | 0.8453 | 0.8520 | 0.8992 | 0.9123 | 0.6385 | | 0.0682 | 7.0 | 3745 | 1.0282 | 0.8444 | 0.8938 | 0.9061 | 0.6189 | | 0.0431 | 8.0 | 4280 | 1.1114 | 0.8463 | 0.8960 | 0.9010 | 0.6239 | | 0.0323 | 9.0 | 4815 | 1.1663 | 0.8501 | 0.8970 | 0.8967 | 0.6340 | | 0.0163 | 10.0 | 5350 | 1.1655 | 0.8482 | 0.8961 | 0.8987 | 0.6288 | ### Framework versions - 
Transformers 4.11.0 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
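The results table above reports an "Mcc" column alongside accuracy and F1. As an illustrative aside (not part of the card; the confusion-matrix counts below are invented), the Matthews correlation coefficient for a binary classifier can be sketched in plain Python:

```python
import math

# Matthews correlation coefficient from a binary confusion matrix.
# The counts passed in below are made up for demonstration only.
def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(round(mcc(tp=90, tn=40, fp=10, fn=5), 4))
```

MCC ranges from -1 to 1, with 0 meaning chance-level agreement, which is why it is a stricter summary than accuracy on the unbalanced CoLA label distribution.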
2,310
edwardgowsmith/pt-finegrained-one-shot
null
Entry not found
15
eliza-dukim/bert-base-finetuned-sts
[ "LABEL_0" ]
--- tags: - generated_from_trainer datasets: - klue metrics: - pearsonr - f1 model-index: - name: bert-base-finetuned-sts results: - task: name: Text Classification type: text-classification dataset: name: klue type: klue args: sts metrics: - name: Pearsonr type: pearsonr value: 0.8756147003619346 - name: F1 type: f1 value: 0.8416666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finetuned-sts This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.4115 - Pearsonr: 0.8756 - F1: 0.8417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearsonr | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7836 | 1.0 | 365 | 0.5507 | 0.8435 | 0.8121 | | 0.1564 | 2.0 | 730 | 0.4396 | 0.8495 | 0.8136 | | 0.0989 | 3.0 | 1095 | 0.4115 | 0.8756 | 0.8417 | | 0.0682 | 4.0 | 1460 | 0.4466 | 0.8746 | 0.8449 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.12.1 - Tokenizers 0.10.3
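The card above reports Pearsonr for the KLUE STS task. As an illustrative aside (not part of the card; the score lists are invented), Pearson correlation between predicted and gold similarity scores can be sketched in plain Python:

```python
import math

# Pearson correlation between two small, made-up score lists.
def pearsonr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearsonr([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]), 3))
```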
1,898
espejelomar/BETO_Clasificar_Tweets_Mexicano
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
gchhablani/fnet-large-finetuned-cola
[ "acceptable", "unacceptable" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: fnet-large-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-large-finetuned-cola This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6243 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6195 | 1.0 | 2138 | 0.6527 | 0.0 | | 0.6168 | 2.0 | 4276 | 0.6259 | 0.0 | | 0.616 | 3.0 | 6414 | 0.6243 | 0.0 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
1,819
imzachjohnson/autonlp-spinner-check-16492731
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - imzachjohnson/autonlp-data-spinner-check --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 16492731 ## Validation Metrics - Loss: 0.21610039472579956 - Accuracy: 0.9155366722657816 - Precision: 0.9530714194995978 - Recall: 0.944871149164778 - AUC: 0.9553238723676906 - F1: 0.9489535692456846 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/imzachjohnson/autonlp-spinner-check-16492731 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,107
jaesun/distilbert-base-uncased-finetuned-cola
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.51728018358102 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8815 - Matthews Correlation: 0.5173 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5272 | 1.0 | 535 | 0.5099 | 0.4093 | | 0.3563 | 2.0 | 1070 | 0.5114 | 0.5019 | | 0.2425 | 3.0 | 1605 | 0.6696 | 0.4898 | | 0.1726 | 4.0 | 2140 | 0.7715 | 0.5123 | | 0.132 | 5.0 | 2675 | 0.8815 | 0.5173 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.14.0 - Tokenizers 0.10.3
1,991
kittinan/exercise-feedback-classification
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
# Reddit exercise feedback classification Model to classify Reddit comments for exercise feedback. Current classes are: good, correction, bad posture, not informative. ### Usage To use it locally: ```py from transformers import pipeline classifier = pipeline("text-classification", "kittinan/exercise-feedback-classification") classifier("search for alan thrall deadlift video he will explain basic ques") #[{'label': 'correction', 'score': 0.9998193979263306}] ```
481
lysandre/dum
[ "NEGATIVE", "POSITIVE" ]
--- language: en license: apache-2.0 datasets: - sst2 --- # Sentiment Analysis This is a BERT model fine-tuned for sentiment analysis.
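The dum card above ships no usage snippet. A minimal, hypothetical sketch of turning a two-logit classifier output into the card's NEGATIVE/POSITIVE labels (pure Python; the logit values are invented and no model is downloaded):

```python
import math

# Hypothetical post-processing sketch: softmax over two logits, then map the
# argmax to the card's label names. The logit values are illustrative only.
labels = ["NEGATIVE", "POSITIVE"]
logits = [-1.2, 2.3]
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]
pred = labels[probs.index(max(probs))]
print(pred)  # POSITIVE
```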
137
mmcquade11/reviews-sentiment-analysis
null
Entry not found
15
serdarakyol/interpress-turkish-news-classification
[ "Culture-Art", "Economy", "Politics", "Education", "World", "Sport", "Technology", "Magazine", "Health", "Agenda" ]
--- language: tr datasets: - interpress_news_category_tr --- # INTERPRESS NEWS CLASSIFICATION ## Dataset The dataset was downloaded from Interpress and consists of real-world data. The full dataset contains 273K samples, but I filtered it and used 108K samples for this model. For more information about the dataset, please visit this [link](https://huggingface.co/datasets/interpress_news_category_tr_lite) ## Model Model accuracy on both the train and validation data is 97%. The data was split as 80% train and 20% validation. The results are shown below. ### Classification report ![Classification report](classification_report.png) ### Confusion matrix ![Confusion matrix](confusion_matrix.png) ## Usage for Torch ```sh pip install transformers or pip install transformers==4.3.3 ``` ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("serdarakyol/interpress-turkish-news-classification") model = AutoModelForSequenceClassification.from_pretrained("serdarakyol/interpress-turkish-news-classification") ``` ```python import torch if torch.cuda.is_available(): device = torch.device("cuda") model = model.cuda() print('There are %d GPU(s) available.'
% torch.cuda.device_count()) print('GPU name is:', torch.cuda.get_device_name(0)) else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") ``` ```python import numpy as np def prediction(news): news=[news] indices=tokenizer.batch_encode_plus( news, max_length=512, add_special_tokens=True, return_attention_mask=True, padding='max_length', truncation=True, return_tensors='pt') inputs = indices["input_ids"].clone().detach().to(device) masks = indices["attention_mask"].clone().detach().to(device) with torch.no_grad(): output = model(inputs, token_type_ids=None, attention_mask=masks) logits = output[0] logits = logits.detach().cpu().numpy() pred = np.argmax(logits, axis=1)[0] return pred ``` ```python news = r"ABD'den Prens Selman'a yaptırım yok Beyaz Saray Sözcüsü Psaki, Muhammed bin Selman'a yaptırım uygulamamanın \"doğru karar\" olduğunu savundu. Psaki, \"Tarihimizde, Demokrat ve Cumhuriyetçi başkanların yönetimlerinde diplomatik ilişki içinde olduğumuz ülkelerin liderlerine yönelik yaptırım getirilmemiştir\" dedi."
``` You can find the news in this [link](https://www.ntv.com.tr/dunya/abdden-prens-selmana-yaptirim-yok,YTeWNv0-oU6Glbhnpjs1JQ) (news date: 02/03/2021) ```python labels = { 0 : "Culture-Art", 1 : "Economy", 2 : "Politics", 3 : "Education", 4 : "World", 5 : "Sport", 6 : "Technology", 7 : "Magazine", 8 : "Health", 9 : "Agenda" } pred = prediction(news) print(labels[pred]) # > World ``` ## Usage for Tensorflow ```sh pip install transformers or pip install transformers==4.3.3 ``` ```python import tensorflow as tf from transformers import BertTokenizer, TFBertForSequenceClassification import numpy as np tokenizer = BertTokenizer.from_pretrained('serdarakyol/interpress-turkish-news-classification') model = TFBertForSequenceClassification.from_pretrained("serdarakyol/interpress-turkish-news-classification") inputs = tokenizer(news, return_tensors="tf") inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1 outputs = model(inputs) loss = outputs.loss logits = outputs.logits pred = np.argmax(logits, axis=1)[0] labels[pred] # > World ``` Thanks to [@yavuzkomecoglu](https://huggingface.co/yavuzkomecoglu) for contributing. If you have any questions, please don't hesitate to contact me [![linkedin](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/serdarakyol55/) [![Github](https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white)](https://github.com/serdarakyol)
3,876
sismetanin/sbert-ru-sentiment-rutweetcorp
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
wilsontam/bert-base-uncased-dstc10-knowledge-cluster-classifier
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29",...
--- language: "en" tags: - dstc10 - knowledge cluster classifier widget: - text: "oh and we'll mi thing uh is there bike clo ars or bike crac where i can park my thee" - text: "oh and one more thing uhhh is there bike lockers or a bike rack where i can park my bike" - text: "ni yeah that sounds great ummm dold you have the any idea er could you check for me if there's hat three wifie available there" - text: "nice yeah that sounds great ummm do you have any idea or could you check for me if there's uhhh free wi-fi available there" - text: "perfect and what is the check kin time for that" --- This is the knowledge cluster classifier for the DSTC10 Track 2 knowledge selection task, trained with two heads (a classifier head and an LM head) and an ASR error simulator. For further information, please refer to the GitHub repository: https://github.com/yctam/dstc10_track2_task2. You can use this model with our source code to predict knowledge clusters under ASR errors. AAAI 2022 workshop paper: https://github.com/shanemoon/dstc10/raw/main/papers/dstc10_aaai22_track2_21.pdf
1,133
inovex/multi2convai-logistics-de-bert
[ "details.address", "tour.postcode.select", "tour.finish", "details.safeplace", "details.preferedNeighbour", "details.avoidNeighbour", "tour.job.collected", "no", "yes", "tour.start", "tour.details", "tour.job.signature", "tour.job.delivered", "select", "tour.job.safePlace", "safeplace"...
--- tags: - text-classification widget: - text: "Wo kann ich das Paket ablegen?" license: mit language: de --- # Multi2ConvAI-Logistics: finetuned Bert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-de-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-de-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
984
MhF/distilbert-base-uncased-finetuned-clinc
[ "accept_reservations", "account_blocked", "alarm", "application_status", "apr", "are_you_a_bot", "balance", "bill_balance", "bill_due", "book_flight", "book_hotel", "calculator", "calendar", "calendar_update", "calories", "cancel", "cancel_reservation", "car_rental", "card_declin...
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9187096774193548 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7703 - Accuracy: 0.9187 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 | | 2.6309 | 2.0 | 636 | 1.8797 | 0.8310 | | 1.5443 | 3.0 | 954 | 1.1537 | 0.8974 | | 1.0097 | 4.0 | 1272 | 0.8560 | 0.9135 | | 0.7918 | 5.0 | 1590 | 0.7703 | 0.9187 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
1,890
ffalcao/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9246964318251509 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2237 - Accuracy: 0.9245 - F1: 0.9247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8356 | 1.0 | 250 | 0.3296 | 0.901 | 0.8977 | | 0.254 | 2.0 | 500 | 0.2237 | 0.9245 | 0.9247 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
1,807
ali2066/bert_base_uncased_itr0_0.0001_webDiscourse_01_03_2022-16_08_12
null
Entry not found
15
adit94/relevancy_classifier
null
{'junk': 0, 'relevant': 1}
27
gdario/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.8955 - name: F1 type: f1 value: 0.8918003951340884 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3662 - Accuracy: 0.8955 - F1: 0.8918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 125 | 0.5675 | 0.8265 | 0.8067 | | 0.7565 | 2.0 | 250 | 0.3662 | 0.8955 | 0.8918 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
1,803
ebrigham/EYY-Topic-Classification
[ "climate change", "culture", "democratic values", "digital", "education", "employment and inclusion", "european learning mobility", "health and well-being", "n/a", "natural sustainability", "participation and engagement", "policy dialogues", "renewable energy", "research and innovation", ...
Entry not found
15
mrm8488/spanish-TinyBERT-betito-finetuned-xnli-es
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- tags: - generated_from_trainer datasets: - xnli metrics: - accuracy model-index: - name: spanish-TinyBERT-betito-finetuned-xnli-es results: - task: name: Text Classification type: text-classification dataset: name: xnli type: xnli args: es metrics: - name: Accuracy type: accuracy value: 0.7475049900199601 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanish-TinyBERT-betito-finetuned-xnli-es This model is a fine-tuned version of [mrm8488/spanish-TinyBERT-betito](https://huggingface.co/mrm8488/spanish-TinyBERT-betito) on the xnli dataset. It achieves the following results on the evaluation set: - Loss: 0.7104 - Accuracy: 0.7475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.50838112218154e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.7191 | 1.0 | 49399 | 0.6829 | 0.7112 | | 0.6323 | 2.0 | 98798 | 0.6527 | 0.7305 | | 0.5727 | 3.0 | 148197 | 0.6531 | 0.7465 | | 0.4964 | 4.0 | 197596 | 0.7079 | 0.7427 | | 0.4929 | 5.0 | 246995 | 0.7104 | 0.7475 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
1,896
orzhan/ruroberta-ruatd-binary
null
sberbank-ai/ruRoberta-large fine-tuned for the Russian Artificial Text Detection shared task
89
Kaveh8/autonlp-imdb_rating-625417974
[ "1", "2", "3", "4", "5" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - Kaveh8/autonlp-data-imdb_rating co2_eq_emissions: 0.7952957276830314 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 625417974 - CO2 Emissions (in grams): 0.7952957276830314 ## Validation Metrics - Loss: 1.0167548656463623 - Accuracy: 0.5934065934065934 - Macro F1: 0.5871237509176406 - Micro F1: 0.5934065934065934 - Weighted F1: 0.5905118014752566 - Macro Precision: 0.5959908336094294 - Micro Precision: 0.5934065934065934 - Weighted Precision: 0.5979368174068634 - Macro Recall: 0.5884714803600252 - Micro Recall: 0.5934065934065934 - Weighted Recall: 0.5934065934065934 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Kaveh8/autonlp-imdb_rating-625417974 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Kaveh8/autonlp-imdb_rating-625417974", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Kaveh8/autonlp-imdb_rating-625417974", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,374
saattrupdan/job-listing-filtering-model
null
--- license: mit tags: - generated_from_trainer model-index: - name: job-listing-filtering-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # job-listing-filtering-model This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4639 | 1.55 | 50 | 0.4343 | | 0.407 | 3.12 | 100 | 0.3589 | | 0.3459 | 4.68 | 150 | 0.3110 | | 0.2871 | 6.25 | 200 | 0.2604 | | 0.1966 | 7.8 | 250 | 0.2004 | | 0.0994 | 9.37 | 300 | 0.1766 | | 0.0961 | 10.92 | 350 | 0.2007 | | 0.0954 | 12.49 | 400 | 0.1716 | | 0.0498 | 14.06 | 450 | 0.1642 | | 0.0419 | 15.62 | 500 | 0.1811 | | 0.0232 | 17.18 | 550 | 0.1872 | | 0.0146 | 18.74 | 600 | 0.1789 | | 0.0356 | 20.31 | 650 | 0.1984 | | 0.0325 | 21.86 | 700 | 0.1845 | | 0.0381 | 23.43 | 750 | 0.1994 | | 0.0063 | 24.98 | 800 | 0.1992 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
2,091
FuriouslyAsleep/unhappyZebra100
[ "False", "True" ]
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - FuriouslyAsleep/autotrain-data-techDataClassifeier co2_eq_emissions: 0.6969569001670619 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 664919631 - CO2 Emissions (in grams): 0.6969569001670619 ## Validation Metrics - Loss: 0.022509008646011353 - Accuracy: 1.0 - Precision: 1.0 - Recall: 1.0 - AUC: 1.0 - F1: 1.0 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/FuriouslyAsleep/autotrain-techDataClassifeier-664919631 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("FuriouslyAsleep/autotrain-techDataClassifeier-664919631", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("FuriouslyAsleep/autotrain-techDataClassifeier-664919631", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,172
Aymene/Fake-news-detection-bert-based-uncased
[ "LABEL_0" ]
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Fake-news-detection-bert-based-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Fake-news-detection-bert-based-uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Tokenizers 0.11.6
1,032
Giyaseddin/distilbert-base-cased-finetuned-fake-and-real-news-dataset
null
--- license: gpl-3.0 language: en library: transformers other: distilbert datasets: - Fake and real news dataset --- # DistilBERT base cased model for Fake News Classification ## Model description DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts using the BERT base model. This is a Fake News classification model finetuned from the [pretrained DistilBERT model](https://huggingface.co/distilbert-base-cased) on the [Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset). ## Intended uses & limitations This model can only be used for news similar to those in the dataset; please visit the [dataset's kaggle page](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) to see the data. ### How to use You can use this model directly with a text classification pipeline: ```python >>> from transformers import pipeline >>> classifier = pipeline("text-classification", model="Giyaseddin/distilbert-base-cased-finetuned-fake-and-real-news-dataset", return_all_scores=True) >>> examples = ["Yesterday, Speaker Paul Ryan tweeted a video of himself on the Mexican border flying in a helicopter and traveling on horseback with US border agents. RT if you agree It is time for The Wall. pic.twitter.com/s5MO8SG7SL Paul Ryan (@SpeakerRyan) August 1, 2017It makes for great theater to see Republican Speaker Ryan pleading the case for a border wall, but how sincere are the GOP about building the border wall? Even after posting a video that appears to show Ryan s support for the wall, he still seems unsure of himself.
It s almost as though he s testing the political winds when he asks Twitter users to retweet if they agree that we need to start building the wall. How committed is the (formerly?) anti-Trump Paul Ryan to building the border wall that would fulfill one of President Trump s most popular campaign promises to the American people? Does he have the what it takes to defy the wishes of corporate donors and the US Chamber of Commerce, and do the right thing for the national security and well-being of our nation?The Last Refuge- Republicans are in control of the House of Representatives, Republicans are in control of the Senate, a Republican President is in the White House, and somehow there s negotiations on how to fund the #1 campaign promise of President Donald Trump, the border wall.Here s the rub.Here s what pundits never discuss.The Republican party doesn t need a single Democrat to fund the border wall.A single spending bill could come from the House of Representatives that fully funds 100% of the border wall. The spending bill then goes to the senate, where again, it doesn t need a single Democrat vote because spending legislation is specifically what reconciliation was designed to facilitate. 
That House bill can pass the Senate with 51 votes and proceed directly to the President s desk for signature.So, ask yourself: why is this even a point of discussion?The honest answer, for those who are no longer suffering from Battered Conservative Syndrome, is that Republicans don t want to fund or build an actual physical barrier known as the Southern Border Wall.It really is that simple.If one didn t know better, they d almost think Speaker Ryan was attempting to emulate the man he clearly despised during the 2016 presidential campaign."] >>> classifier(examples) [[{'label': 'LABEL_0', 'score': 1.0}, {'label': 'LABEL_1', 'score': 1.0119109106199176e-08}]] ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. It also inherits some of [the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias). This bias will also affect all fine-tuned versions of this model. ## Pre-training data DistilBERT was pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Fine-tuning data [Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) ## Training procedure ### Preprocessing In the preprocessing phase, both the title and the text of the news are concatenated using the separator `[SEP]`. The full input text then becomes: ``` [CLS] Title Sentence [SEP] News text body [SEP] ``` The data are split according to the following ratio: - Training set 60%. - Validation set 20%. - Test set 20%. Labels are mapped as: `{fake: 0, true: 1}` ### Fine-tuning The model was fine-tuned on a GeForce GTX 960M for 5 hours. 
The parameters are: | Parameter | Value | |:-------------------:|:-----:| | Learning rate | 5e-5 | | Weight decay | 0.01 | | Training batch size | 4 | | Epochs | 3 | Here are the scores during training: | Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | |:----------:|:-------------:|:-----------------:|:----------:|:---------:|:-----------:|:---------:| | 1 | 0.008300 | 0.005783 | 0.998330 | 0.998252 | 0.996511 | 1.000000 | | 2 | 0.000000 | 0.000161 | 0.999889 | 0.999883 | 0.999767 | 1.000000 | | 3 | 0.000000 | 0.000122 | 0.999889 | 0.999883 | 0.999767 | 1.000000 | ## Evaluation results When fine-tuned on the downstream task of fake news binary classification, this model achieved the following results (scores are rounded to 2 decimal places): | | precision | recall | f1-score | support | |:------------:|:---------:|:------:|:--------:|:-------:| | Fake | 1.00 | 1.00 | 1.00 | 4697 | | True | 1.00 | 1.00 | 1.00 | 4283 | | accuracy | - | - | 1.00 | 8980 | | macro avg | 1.00 | 1.00 | 1.00 | 8980 | | weighted avg | 1.00 | 1.00 | 1.00 | 8980 | Confusion matrix: | Actual\Predicted | Fake | True | |:-----------------:|:----:|:----:| | Fake | 4696 | 1 | | True | 1 | 4282 | The AUC score is 0.9997
6,720
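The preprocessing step described in the card above (title and body joined with a `[SEP]` separator, labels mapped as `{fake: 0, true: 1}`) can be sketched in plain Python. This is a minimal illustration: the helper names are assumptions, not part of the released model; only the joining scheme and label map come from the card.

```python
# Illustrative sketch of the preprocessing described in the card above.
# Only the [SEP] joining scheme and the {fake: 0, true: 1} label map
# come from the card; the helper names are assumptions.

LABEL_MAP = {"fake": 0, "true": 1}

def build_input(title: str, body: str, sep_token: str = "[SEP]") -> str:
    """Join title and body with a separator.

    When encoding, the tokenizer adds the [CLS]/[SEP] wrapping itself,
    so this only mirrors the layout
    [CLS] Title Sentence [SEP] News text body [SEP] conceptually.
    """
    return f"{title} {sep_token} {body}"

def encode_label(label: str) -> int:
    return LABEL_MAP[label.lower()]

print(build_input("Some headline", "The article body."))
# Some headline [SEP] The article body.
print(encode_label("fake"))  # 0
```

In practice one would pass `(title, body)` to the tokenizer as a text pair and let it insert the special tokens.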
hackathon-pln-es/readability-es-paragraphs
[ "complex", "simple" ]
--- language: es license: cc-by-4.0 tags: - spanish - roberta - bertin pipeline_tag: text-classification widget: - text: La cueva de Zaratustra en el Pretil de los Consejos. Rimeros de libros hacen escombro y cubren las paredes. Empapelan los cuatro vidrios de una puerta cuatro cromos espeluznantes de un novelón por entregas. En la cueva hacen tertulia el gato, el can, el loro y el librero. Zaratustra, abichado y giboso -la cara de tocino rancio y la bufanda de verde serpiente- promueve con su caracterización de fantoche, una aguda y dolorosa disonancia muy emotiva y muy moderna. Encogido en el roto pelote de su silla enana, con los pies entrapados y cepones en la tarima del brasero, guarda la tienda. Un ratón saca el hocico intrigante por un agujero. --- # Readability ES Paragraphs for two classes Model based on the Roberta architecture finetuned on [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for readability assessment of Spanish texts. ## Description and performance This version of the model was trained on a mix of datasets, using paragraph-level granularity when possible. The model performs binary classification among the following classes: - Simple. - Complex. It achieves a F1 macro average score of 0.8891, measured on the validation set. ## Model variants - [`readability-es-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-sentences). Two classes, sentence-based dataset. - `readability-es-paragraphs` (this model). Two classes, paragraph-based dataset. - [`readability-es-3class-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-3class-sentences). Three classes, sentence-based dataset. - [`readability-es-3class-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-3class-paragraphs). Three classes, paragraph-based dataset. 
## Datasets - [`readability-es-hackathon-pln-public`](https://huggingface.co/datasets/hackathon-pln-es/readability-es-hackathon-pln-public), composed of: * coh-metrix-esp corpus. * Various text resources scraped from websites. - Other non-public datasets: newsela-es, simplext. ## Training details Please, refer to [this training run](https://wandb.ai/readability-es/readability-es/runs/2z8080pi/overview) for full details on hyperparameters and training regime. ## Biases and Limitations - Due to the scarcity of data and the lack of a reliable gold test set, performance metrics are reported on the validation set. - One of the datasets involved is the Spanish version of newsela, which is frequently used as a reference. However, it was created by translating previous datasets, and therefore it may contain somewhat unnatural phrases. - Some of the datasets used cannot be publicly disseminated, making it more difficult to assess the existence of biases or mistakes. - Language might be biased towards the Spanish dialect spoken in Spain. Other regional variants might be sub-represented. - No effort has been performed to alleviate the shortcomings and biases described in the [original implementation of BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish#bias-examples-spanish). ## Authors - [Laura Vásquez-Rodríguez](https://lmvasque.github.io/) - [Pedro Cuenca](https://twitter.com/pcuenq) - [Sergio Morales](https://www.fireblend.com/) - [Fernando Alva-Manchego](https://feralvam.github.io/)
3,380
Stremie/roberta-base-clickbait-keywords
null
This model classifies whether a tweet is clickbait or not. It has been trained on the [Webis-Clickbait-17](https://webis.de/data/webis-clickbait-17.html) dataset. The input is composed of 'postText' + '[SEP]' + 'targetKeywords'. It achieves ~0.7 F1-score on the test data.
261
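A minimal sketch of how the input described above could be assembled before being passed to a text-classification pipeline. The helper function is hypothetical, and the actual pipeline call is left commented out because it would download the checkpoint:

```python
# Hypothetical helper mirroring the input format stated above:
# 'postText' + '[SEP]' + 'targetKeywords'.

def build_clickbait_input(post_text: str, target_keywords: str) -> str:
    return f"{post_text} [SEP] {target_keywords}"

# A pipeline call would then look roughly like this (requires downloading
# the checkpoint, so it is commented out here):
# from transformers import pipeline
# clf = pipeline("text-classification",
#                model="Stremie/roberta-base-clickbait-keywords")
# clf(build_clickbait_input("You won't believe what happened", "celebrity"))

print(build_clickbait_input("You won't believe what happened", "celebrity"))
# You won't believe what happened [SEP] celebrity
```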
dpazmino/finetuning-sentiment-model_duke_final
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: finetuning-sentiment-model_duke_final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model_duke_final This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4776 - F1: 0.8708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
1,180
MartinoMensio/racism-models-regression-w-m-vote-epoch-4
[ "LABEL_0" ]
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022). We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022). We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | 
[m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `regression-w-m-vote-epoch-4` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline from transformers.pipelines import TextClassificationPipeline class TextRegressionPipeline(TextClassificationPipeline): """ Class based on the TextClassificationPipeline from transformers. The difference is that instead of being based on a classifier, it is based on a regressor. 
You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline. """ def __init__(self, **kwargs): """ Builds a new Pipeline based on regression. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold = kwargs.pop("regression_threshold", None) super().__init__(**kwargs) def __call__(self, *args, **kwargs): """ You can also specify the regression threshold when you call the pipeline. regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label. """ self.regression_threshold_call = kwargs.pop("regression_threshold", None) result = super().__call__(*args, **kwargs) return result def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False): outputs = model_outputs["logits"][0] outputs = outputs.numpy() scores = outputs score = scores[0] regression_threshold = self.regression_threshold # override the specific threshold if it is specified in the call if self.regression_threshold_call: regression_threshold = self.regression_threshold_call if regression_threshold: return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score} else: return {"score": score} model_name = 'regression-w-m-vote-epoch-4' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! 
NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] # just get the score of regression print(pipe(texts)) # [{'score': 0.8345461}, {'score': 0.48615143}] # or also specify a threshold to cut racist/non-racist print(pipe(texts, regression_threshold=0.9)) # [{'label': 'non-racist', 'score': 0.8345461}, {'label': 'non-racist', 'score': 0.48615143}] ``` For more details, see https://github.com/preyero/neatclass22
6,364
liamcripwell/ctrl44-clf
[ "ignore", "rephrase", "syntax-split", "discourse-split" ]
--- language: en --- # CTRL44 Classification model This is a pretrained version of the 4-class simplification operation classifier presented in the NAACL 2022 paper "Controllable Sentence Simplification via Operation Classification". It was trained on the IRSD classification dataset. Predictions from this model can be used as input to the [simplification model](https://huggingface.co/liamcripwell/ctrl44-simp) to reproduce the pipeline results seen in the paper. ## How to use Here is how to use this model in PyTorch: ```python import torch from transformers import RobertaForSequenceClassification, AutoTokenizer model = RobertaForSequenceClassification.from_pretrained("liamcripwell/ctrl44-clf") tokenizer = AutoTokenizer.from_pretrained("liamcripwell/ctrl44-clf") text = "Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017." inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits predicted_class_id = logits.argmax().item() predicted_class_name = model.config.id2label[predicted_class_id] ```
1,112
Hate-speech-CNERG/hindi-codemixed-abusive-MuRIL
null
--- language: hi-en license: afl-3.0 --- This model is used for detecting **abusive speech** in **Code-Mixed Hindi**. It is fine-tuned from the MuRIL model on a code-mixed Hindi abusive speech dataset. The model was trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive) LABEL_0 :-> Normal LABEL_1 :-> Abusive ### For more details about our paper Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022. ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @article{das2022data, title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages}, author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh}, journal={arXiv preprint arXiv:2204.12543}, year={2022} } ~~~
982
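Since the checkpoint exposes raw `LABEL_0`/`LABEL_1` names, a small post-processing helper can map pipeline outputs to the meanings given in the card above. The `decode` helper itself is an illustrative assumption, not part of the release; the mocked output below only mimics the shape of a transformers text-classification result:

```python
# Maps the raw pipeline labels to the meanings stated in the card:
# LABEL_0 -> Normal, LABEL_1 -> Abusive.

ID2LABEL = {"LABEL_0": "Normal", "LABEL_1": "Abusive"}

def decode(outputs):
    """Rename labels in a list of pipeline-style {label, score} dicts."""
    return [{"label": ID2LABEL[o["label"]], "score": o["score"]}
            for o in outputs]

# A mocked output, shaped like a transformers text-classification result:
mock = [{"label": "LABEL_1", "score": 0.93}]
print(decode(mock))  # [{'label': 'Abusive', 'score': 0.93}]
```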
Hate-speech-CNERG/malayalam-codemixed-abusive-MuRIL
null
--- language: ma-en license: afl-3.0 --- This model is used to detect **abusive speech** in **Code-Mixed Malayalam**. It is fine-tuned from the MuRIL model on a Code-Mixed Malayalam abusive speech dataset. The model was trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive) LABEL_0 :-> Normal LABEL_1 :-> Abusive ### For more details about our paper Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022. ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @article{das2022data, title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages}, author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh}, journal={arXiv preprint arXiv:2204.12543}, year={2022} } ~~~
990
Nithiwat/fake-news-debunker
[ "0", "1" ]
--- tags: autotrain language: en widget: - text: "Bill Gates wants to use mass Covid-19 vaccination campaign to implant microchips to track people" datasets: - Fake and real news datasets by CLÉMENT BISAILLON co2_eq_emissions: 4.415122243239347 --- # Model Trained Using AutoTrain - Problem: Fake News Classification - Problem type: Binary Classification - Model ID: 785124234 - CO2 Emissions (in grams): 4.415122243239347 ## Validation Metrics - Loss: 0.00012586714001372457 - Accuracy: 0.9998886538247411 - Precision: 1.0 - Recall: 0.9997665732959851 - AUC: 0.9999999999999999 - F1: 0.999883273024396 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Nithiwat/autotrain-fake-news-classifier-785124234 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Nithiwat/autotrain-fake-news-classifier-785124234", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Nithiwat/autotrain-fake-news-classifier-785124234", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,326
Sie-BERT/glue_sst_classifier
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - f1 - accuracy model-index: - name: glue_sst_classifier results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metrics: - name: F1 type: f1 value: 0.9033707865168539 - name: Accuracy type: accuracy value: 0.9013761467889908 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # glue_sst_classifier This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2359 - F1: 0.9034 - Accuracy: 0.9014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 | | 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 | | 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 | | 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 | | 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
1,993
cassiepowell/LaBSE-for-agreement
[ "0", "1", "2" ]
Entry not found
15
efederici/cross-encoder-distilbert-it
[ "LABEL_0" ]
--- pipeline_tag: text-classification license: apache-2.0 language: - it tags: - cross-encoder - sentence-similarity - transformers --- # Cross-Encoder The model can be used for Information Retrieval: given a query, encode the query with all candidate passages, then sort the passages in decreasing order of score. <p align="center"> <img src="https://www.exibart.com/repository/media/2020/07/bridget-riley-cool-edge.jpg" width="400"> </br> Bridget Riley, COOL EDGE </p> ## Training Data This model was trained on a custom biomedical ranking dataset. ## Usage and Performance ```python from sentence_transformers import CrossEncoder model = CrossEncoder('efederici/cross-encoder-distilbert-it') scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')]) ``` The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.
904
charlieoneill/distilbert-base-uncased-finetuned-tweet_eval-offensive
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-tweet_eval-offensive results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: offensive metrics: - name: Accuracy type: accuracy value: 0.8089123867069486 - name: F1 type: f1 value: 0.8060281168230459 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-tweet_eval-offensive This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.4185 - Accuracy: 0.8089 - F1: 0.8060 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 187 | 0.4259 | 0.8059 | 0.7975 | | 0.46 | 2.0 | 374 | 0.4185 | 0.8089 | 0.8060 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1 - Datasets 2.1.0 - Tokenizers 0.12.1
1,851
LiYouYou/BERT_MRPC
[ "equivalent", "not_equivalent" ]
Entry not found
15
LiYouYou/bert_finetuning_cn
[ "negative", "positive" ]
--- language: - en tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert_finetuning_cn results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.8314220183486238 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_finetuning_cn This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.5440 - Accuracy: 0.8314 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
1,385
JoMart/albert-base-v2
null
--- tags: - generated_from_trainer model-index: - name: albert-base-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.024 | 1.0 | 4000 | 0.0300 | | 0.0049 | 2.0 | 8000 | 0.0075 | | 0.0 | 3.0 | 12000 | 0.0125 | | 0.0 | 4.0 | 16000 | 0.0101 | | 0.0056 | 5.0 | 20000 | 0.0104 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.0 - Datasets 2.1.0 - Tokenizers 0.12.1
1,366
LiYuan/Amazon-Cross-Encoder-Classification
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- license: afl-3.0 --- There are two types of Cross-Encoder models. One is the Cross-Encoder Regression model that we fine-tuned and mentioned in the previous section. Next, we have the Cross-Encoder Classification model. These two models are introduced in the same paper https://doi.org/10.48550/arxiv.1908.10084 Both models resolve the issue that the BERT model is too time- and resource-consuming to train on pairwise sentence inputs. The weights of both models are initialized from the BERT and RoBERTa networks. We only need to fine-tune them, spending much less time to yield comparable or even better sentence embeddings. The figure below shows the architecture of the Cross-Encoder Classification model. ![](1.png) Then we evaluated the model performance on the 2,000 held-out test set. We also obtained a test accuracy of **46.05%**, almost identical to the best validation accuracy, suggesting that the model generalizes well.
945
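The classification head of such a model outputs one logit per class; a softmax followed by an argmax picks the predicted label. The four label names below come from this checkpoint's label list, while the logit values are made up purely for illustration:

```python
import math

# Label names from the checkpoint's label list; logits below are invented.
LABELS = ["LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3"]

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Argmax over the softmax probabilities -> predicted class name."""
    probs = softmax(logits)
    return LABELS[probs.index(max(probs))]

print(predict([0.1, 2.3, -1.0, 0.4]))  # LABEL_1
```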
Jeevesh8/bert_ft_cola-67
null
Entry not found
15
JoanTirant/roberta-base-bne-finetuned-amazon_reviews_multi
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-amazon_reviews_multi results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.93425 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2291 - Accuracy: 0.9343 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1909 | 1.0 | 1250 | 0.1784 | 0.9295 | | 0.1013 | 2.0 | 2500 | 0.2291 | 0.9343 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
1,754
CleveGreen/JobClassifier_v3_gpt
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_100", "LABEL_101", "LABEL_102", "LABEL_103", "LABEL_104", "LABEL_105", "LABEL_106", "LABEL_107", "LABEL_108", "LABEL_109", "LABEL_11", "LABEL_110", "LABEL_111", "LABEL_112", "LABEL_113", "LABEL_114", "LABEL_115", "LABEL_116", "LABEL_...
Entry not found
15
nikitast/lang-classifier-roberta
[ "az", "be", "de", "en", "he", "hy", "ka", "kk", "ru", "uk" ]
--- language: - ru - uk - be - kk - az - hy - ka - he - en - de tags: - language classification datasets: - open_subtitles - tatoeba - oscar --- # RoBERTa for Single Language Classification ## Training RoBERTa fine-tuned on small parts of the Open Subtitles, Oscar and Tatoeba datasets (~9k samples per language). | data source | language | |-----------------|----------------| | open_subtitles | ka, he, en, de | | oscar | be, kk, az, hy | | tatoeba | ru, uk | ## Validation The metrics obtained from validation on another part of the dataset (~1k samples per language). |index|class|f1-score|precision|recall|support| |---|---|---|---|---|---| |0|az|0\.998|0\.997|1\.0|997| |1|be|0\.996|0\.998|0\.994|1004| |2|de|0\.976|0\.966|0\.987|979| |3|en|0\.976|0\.986|0\.967|1020| |4|he|1\.0|1\.0|0\.999|1001| |5|hy|0\.994|0\.991|0\.998|993| |6|ka|0\.999|0\.999|0\.999|1000| |7|kk|0\.996|0\.998|0\.993|1005| |8|uk|0\.982|0\.997|0\.968|1030| |9|ru|0\.982|0\.968|0\.997|971| |10|macro\_avg|0\.99|0\.99|0\.99|10000| |11|weighted avg|0\.99|0\.99|0\.99|10000|
1,086
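As a sanity check on the table above, the macro and weighted averages can be recomputed from the per-class F1 scores and supports:

```python
# Recomputes the macro and weighted-average F1 from the per-class rows above.
f1 = {"az": 0.998, "be": 0.996, "de": 0.976, "en": 0.976, "he": 1.0,
      "hy": 0.994, "ka": 0.999, "kk": 0.996, "uk": 0.982, "ru": 0.982}
support = {"az": 997, "be": 1004, "de": 979, "en": 1020, "he": 1001,
           "hy": 993, "ka": 1000, "kk": 1005, "uk": 1030, "ru": 971}

total = sum(support.values())                       # 10000, as in the table
macro_f1 = sum(f1.values()) / len(f1)               # unweighted mean
weighted_f1 = sum(f1[k] * support[k] for k in f1) / total

print(total, round(macro_f1, 2), round(weighted_f1, 2))  # 10000 0.99 0.99
```

Both averages round to 0.99, matching the last two rows of the table.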
nikitast/multilang-classifier-roberta
[ "az", "be", "de", "en", "he", "hy", "ka", "kk", "ru", "uk" ]
--- language: - ru - uk - be - kk - az - hy - ka - he - en - de tags: - language classification datasets: - open_subtitles - tatoeba - oscar --- # RoBERTa for Multilabel Language Classification ## Training RoBERTa fine-tuned on small parts of the Open Subtitles, Oscar and Tatoeba datasets (~9k samples per language). A heuristic algorithm for multilingual training-data creation is implemented at https://github.com/n1kstep/lang-classifier | data source | language | |-----------------|----------------| | open_subtitles | ka, he, en, de | | oscar | be, kk, az, hy | | tatoeba | ru, uk | ## Validation The metrics obtained from validation on another part of the dataset (~1k samples per language). | Training Loss | Validation Loss | F1-Score | Roc Auc | Accuracy | Support | |---------------|-----------------|----------|----------|----------|---------| | 0.161500 | 0.110949 | 0.947844 | 0.953939 | 0.762063 | 26858 |
969
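A multilabel classifier emits one score per language rather than a single class. The helper below is a hypothetical post-processing sketch for nikitast/multilang-classifier-roberta, assuming sigmoid scores in the model's label order (az, be, de, en, he, hy, ka, kk, ru, uk) and a 0.5 threshold:

```python
# Label order assumed to match the model's label list above.
LANGS = ["az", "be", "de", "en", "he", "hy", "ka", "kk", "ru", "uk"]

def decode_multilabel(scores, threshold=0.5):
    """Return the language codes whose score exceeds the threshold."""
    return [lang for lang, score in zip(LANGS, scores) if score > threshold]

# A text mixing Russian and Ukrainian could score high on both labels.
print(decode_multilabel([0.01, 0.02, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.91, 0.88]))  # → ['ru', 'uk']
```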
Jeevesh8/6ep_bert_ft_cola-50
null
Entry not found
15
Jeevesh8/6ep_bert_ft_cola-54
null
Entry not found
15
Jeevesh8/6ep_bert_ft_cola-68
null
Entry not found
15
Jeevesh8/6ep_bert_ft_cola-69
null
Entry not found
15
Jeevesh8/6ep_bert_ft_cola-71
null
Entry not found
15
Jeevesh8/6ep_bert_ft_cola-75
null
Entry not found
15
YeRyeongLee/mental-bert-base-uncased-masked_finetuned-0517
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: mental-bert-base-uncased-masked_finetuned-0517 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mental-bert-base-uncased-masked_finetuned-0517 This model is a fine-tuned version of [mental/mental-bert-base-uncased](https://huggingface.co/mental/mental-bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5217 - Accuracy: 0.917 - F1: 0.9171 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | No log | 1.0 | 3000 | 0.2922 | 0.8993 | 0.8997 | | No log | 2.0 | 6000 | 0.3964 | 0.9063 | 0.9069 | | No log | 3.0 | 9000 | 0.4456 | 0.9197 | 0.9197 | | No log | 4.0 | 12000 | 0.5217 | 0.917 | 0.9171 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
1,649
Jeevesh8/512seq_len_6ep_bert_ft_cola-68
null
Entry not found
15
Jeevesh8/512seq_len_6ep_bert_ft_cola-75
null
Entry not found
15
Jeevesh8/512seq_len_6ep_bert_ft_cola-78
null
Entry not found
15
connectivity/feather_berts_42
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
connectivity/bert_ft_qqp-10
null
Entry not found
15
connectivity/feather_berts_0
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
reemalyami/AraRoBERTa_Poem_classification
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
Entry not found
15
sahn/distilbert-base-uncased-finetuned-imdb
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9294 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2214 - Accuracy: 0.9294 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2435 | 1.0 | 1250 | 0.2186 | 0.917 | | 0.1495 | 2.0 | 2500 | 0.2214 | 0.9294 | | 0.0829 | 3.0 | 3750 | 0.4892 | 0.8918 | | 0.0472 | 4.0 | 5000 | 0.5189 | 0.8976 | | 0.0268 | 5.0 | 6250 | 0.5478 | 0.8996 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
1,861
Paoloant/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
Entry not found
15
Jeevesh8/lecun_feather_berts-34
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/lecun_feather_berts-67
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/lecun_feather_berts-6
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/lecun_feather_berts-18
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/lecun_feather_berts-12
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/lecun_feather_berts-10
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/lecun_feather_berts-7
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/lecun_feather_berts-16
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/std_pnt_04_feather_berts-10
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/std_pnt_04_feather_berts-21
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/std_pnt_04_feather_berts-9
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/std_pnt_04_feather_berts-22
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
EventMiner/xlm-roberta-large-en-pt-es-doc
null
--- language: multilingual tags: - news event detection - document level - EventMiner license: apache-2.0 --- # EventMiner EventMiner is designed for multilingual news event detection. The goal of news event detection is the automatic extraction of event details from news articles. This extraction can be done at different levels: document, sentence and word, ranging from coarse-grained to fine-grained information. We submitted the best results based on EventMiner to [CASE 2021 shared task 1: *Multilingual Protest News Detection*](https://competitions.codalab.org/competitions/31247). Our approach won first place in English for the document-level task while ranking within the top four solutions for the other languages: Portuguese, Spanish, and Hindi. *EventMiner/xlm-roberta-large-en-pt-es-doc* is an xlm-roberta-large sequence classification model fine-tuned on English, Portuguese and Spanish document-level data of the multilingual version of the GLOCON gold standard dataset released with [CASE 2021](https://aclanthology.org/2021.case-1.11/). <br> Labels: - Label_0: News article does not contain information about a past or ongoing socio-political event - Label_1: News article contains information about a past or ongoing socio-political event More details about the training procedure are available in our [codebase](https://github.com/HHansi/EventMiner).
# How to Use ## Load Model ```python from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification model_name = 'EventMiner/xlm-roberta-large-en-pt-es-doc' tokenizer = XLMRobertaTokenizer.from_pretrained(model_name) model = XLMRobertaForSequenceClassification.from_pretrained(model_name) ``` ## Classification ```python from transformers import pipeline classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) classifier("Police arrested five more student leaders on Monday when implementing the strike call given by MSU students union as a mark of protest against the decision to introduce payment seats in first-year commerce programme.") ``` # Citation If you use this model, please consider citing the following paper. ``` @inproceedings{hettiarachchi-etal-2021-daai, title = "{DAAI} at {CASE} 2021 Task 1: Transformer-based Multilingual Socio-political and Crisis Event Detection", author = "Hettiarachchi, Hansi and Adedoyin-Olowe, Mariam and Bhogal, Jagdev and Gaber, Mohamed Medhat", booktitle = "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.case-1.16", doi = "10.18653/v1/2021.case-1.16", pages = "120--130", } ```
2,857
S2312dal/M1_cross
[ "LABEL_0" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - spearmanr model-index: - name: M1_cross results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # M1_cross This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0066 - Pearson: 0.9828 - Spearmanr: 0.9147 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 25 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 125.0 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 0.0294 | 1.0 | 131 | 0.0457 | 0.8770 | 0.8351 | | 0.0237 | 2.0 | 262 | 0.0302 | 0.9335 | 0.8939 | | 0.015 | 3.0 | 393 | 0.0155 | 0.9594 | 0.9054 | | 0.0177 | 4.0 | 524 | 0.0106 | 0.9778 | 0.9091 | | 0.0087 | 5.0 | 655 | 0.0066 | 0.9828 | 0.9147 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,683
S2312dal/M6_cross
[ "LABEL_0" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - spearmanr model-index: - name: M6_cross results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # M6_cross This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0084 - Pearson: 0.9811 - Spearmanr: 0.9075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 20 - eval_batch_size: 20 - seed: 25 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 6.0 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 0.0059 | 1.0 | 105 | 0.0158 | 0.9633 | 0.9054 | | 0.001 | 2.0 | 210 | 0.0102 | 0.9770 | 0.9103 | | 0.0008 | 3.0 | 315 | 0.0083 | 0.9805 | 0.9052 | | 0.0011 | 4.0 | 420 | 0.0075 | 0.9812 | 0.9082 | | 0.0017 | 5.0 | 525 | 0.0084 | 0.9811 | 0.9075 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,727
valurank/finetuned-distilbert-adult-content-detection
null
--- license: other tags: - generated_from_trainer model-index: - name: finetuned-distilbert-adult-content-detection results: [] --- ### finetuned-distilbert-adult-content-detection This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the adult_content dataset. It achieves the following results on the evaluation set: - Loss: 0.0065 - F1_score(weighted): 0.90 ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data The model was trained on a subset of the adult_content dataset and validated on the remaining subset. ### Training procedure More information needed ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-5 - train_batch_size: 5 - eval_batch_size: 5 - seed: 17 - optimizer: AdamW(lr=1e-5 and epsilon=1e-08) - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0 - num_epochs: 2 ### Training results | Training Loss | Epoch | Validation Loss | f1 score | |:-------------:|:-----:|:---------------:|:------:| | 0.1414 | 1.0 | 0.4585 | 0.9058 | | 0.1410 | 2.0 | 0.4584 | 0.9058 |
1,259
linuxcoder/distilbert-base-uncased-finetuned-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.924047984825329 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2294 - Accuracy: 0.924 - F1: 0.9240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3316 | 0.9025 | 0.8985 | | No log | 2.0 | 500 | 0.2294 | 0.924 | 0.9240 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,803
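The emotion model above maps class indices to the labels sadness, joy, love, anger, fear, surprise. A small sketch (label order taken from the list above; the helper itself is hypothetical, not from the card) for turning raw logits into a label:

```python
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def top_emotion(logits):
    """Return the emotion with the highest logit (argmax)."""
    return EMOTIONS[max(range(len(logits)), key=logits.__getitem__)]

print(top_emotion([0.1, 2.3, 0.2, 0.1, 0.0, 0.4]))  # → joy
```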
dunlp/GWW-finetuned-cola
null
--- tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: GWW-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.16962352015480656 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GWW-finetuned-cola This model is a fine-tuned version of [dunlp/GWW](https://huggingface.co/dunlp/GWW) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6609 - Matthews Correlation: 0.1696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6181 | 1.0 | 535 | 0.6585 | 0.0 | | 0.5938 | 2.0 | 1070 | 0.6276 | 0.0511 | | 0.5241 | 3.0 | 1605 | 0.6609 | 0.1696 | | 0.4433 | 4.0 | 2140 | 0.8239 | 0.1432 | | 0.3492 | 5.0 | 2675 | 0.9236 | 0.1351 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,912
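Several cards above report the Matthews correlation used for CoLA. For reference, it can be computed directly from binary confusion-matrix counts; a self-contained sketch:

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc(tp=50, tn=50, fp=0, fn=0))  # perfect predictions → 1.0
```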
Sayan01/tiny-bert-mnli-distilled
[ "contradiction", "entailment", "neutral" ]
Entry not found
15
Elron/deberta-v3-large-emotion
[ "0", "1", "2", "3" ]
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large results: [] --- # deberta-v3-large-sentiment This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset. ## Model description Test set results: | Model | Emotion | Hate | Irony | Offensive | Sentiment | | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | | deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** | | BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 | | RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 | [source:papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval) ## Intended uses & limitations Classifying attributes of interest on Twitter-like data. ## Training and evaluation data [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset. ## Training procedure Fine-tuned and evaluated with [run_glue.py]() ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 10.0 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2787 | 0.49 | 100 | 1.1127 | 0.4866 | | 1.089 | 0.98 | 200 | 0.9668 | 0.7139 | | 0.9134 | 1.47 | 300 | 0.8720 | 0.7834 | | 0.8618 | 1.96 | 400 | 0.7726 | 0.7941 | | 0.686 | 2.45 | 500 | 0.7337 | 0.8209 | | 0.6333 | 2.94 | 600 | 0.7350 | 0.8235 | | 0.5765 | 3.43 | 700 | 0.7561 | 0.8235 | | 0.5502 | 3.92 | 800 | 0.7273 | 0.8476 | | 0.5049 | 4.41 | 900 | 0.8137 | 0.8102 | | 0.4695 | 4.9 | 1000 | 0.7581 | 0.8289 | | 0.4657 | 5.39 | 1100 | 0.8404 | 0.8048 | | 0.4549 | 
5.88 | 1200 | 0.7800 | 0.8369 | | 0.4305 | 6.37 | 1300 | 0.8575 | 0.8235 | | 0.4209 | 6.86 | 1400 | 0.8572 | 0.8102 | | 0.3983 | 7.35 | 1500 | 0.8392 | 0.8316 | | 0.4139 | 7.84 | 1600 | 0.8152 | 0.8209 | | 0.393 | 8.33 | 1700 | 0.8261 | 0.8289 | | 0.3979 | 8.82 | 1800 | 0.8328 | 0.8235 | | 0.3928 | 9.31 | 1900 | 0.8364 | 0.8209 | | 0.3848 | 9.8 | 2000 | 0.8322 | 0.8235 | ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.9.0 - Datasets 2.2.2 - Tokenizers 0.11.6
3,130
Smith123/tiny-bert-sst2-distilled
[ "negative", "positive" ]
Entry not found
15
climabench/miniLM-cdp-all
[ "LABEL_0" ]
Entry not found
15
ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS
null
Entry not found
15
eus/testes
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
Entry not found
15
PGT/nystromformer-artificial-balanced-max500-490000-1
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6" ]
Entry not found
15
jinwooChoi/SKKU_AP_SA_KBT2
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
codeparrot/codeparrot-small-complexity-prediction
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6" ]
--- license: apache-2.0 --- This is a fine-tuned version of [codeparrot-small-multi](https://huggingface.co/codeparrot/codeparrot-small-multi), a 110M multilingual model for code generation, on [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex), a dataset for complexity prediction of Java code.
314
jinwooChoi/SKKU_AP_SA_KES_trained1
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
jinwooChoi/SKKU_SA_KEB
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
Entry not found
15
CLTL/icf-levels-att
[ "LABEL_0" ]
--- language: nl license: mit pipeline_tag: text-classification inference: false --- # Regression Model for Attention Functioning Levels (ICF b140) ## Description A fine-tuned regression model that assigns a functioning level to Dutch sentences describing attention functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about attention functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model. ## Functioning levels Level | Meaning ---|--- 4 | No problem with concentrating / directing / holding / dividing attention. 3 | Slight problem with concentrating / directing / holding / dividing attention for a longer period of time or for complex tasks. 2 | Can concentrate / direct / hold / divide attention only for a short time. 1 | Can barely concentrate / direct / hold / divide attention. 0 | Unable to concentrate / direct / hold / divide attention. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. ## Intended uses and limitations - The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records). - The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled. 
## How to use To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library: ``` import numpy as np from simpletransformers.classification import ClassificationModel model = ClassificationModel( 'roberta', 'CLTL/icf-levels-att', use_cuda=False, ) example = 'Snel afgeleid, moeite aandacht te behouden.' _, raw_outputs = model.predict([example]) predictions = np.squeeze(raw_outputs) ``` The prediction on the example is: ``` 2.89 ``` The raw outputs look like this: ``` [[2.89226103]] ``` ## Training data - The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released. - The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines). ## Training procedure The default training parameters of Simple Transformers were used, including: - Optimizer: AdamW - Learning rate: 4e-5 - Num train epochs: 1 - Train batch size: 8 ## Evaluation results The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit that is meaningful for healthcare professionals). | | Sentence-level | Note-level |---|---|--- mean absolute error | 0.99 | 1.03 mean squared error | 1.35 | 1.47 root mean squared error | 1.16 | 1.21 ## Authors and references ### Authors Jenia Kim, Piek Vossen ### References TBD
3,270
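The card above evaluates both at the sentence level and at the aggregated note level, but does not specify the aggregation function. A minimal sketch, assuming a simple mean of the sentence-level regression outputs per note:

```python
from statistics import mean

def note_level_score(sentence_scores):
    """Aggregate sentence-level functioning-level predictions into one
    note-level score. A plain mean is an assumption; the aggregation
    actually used by the authors may differ."""
    return round(mean(sentence_scores), 2)

print(note_level_score([2.89, 3.4, 4.1]))  # → 3.46
```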
CenIA/albert-tiny-spanish-finetuned-pawsx
null
Entry not found
15
DSI/human-directed-sentiment
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
# Human-Directed Sentiment Analysis in Arabic A supervised training procedure to classify human-directed sentiment in a text. We define human-directed sentiment as the polarity of one user towards a second person involved with them in a discussion.
260
DSI/personal_sentiment
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
Entry not found
15
DeadBeast/korscm-mBERT
null
--- language: korean license: apache-2.0 datasets: - Korean-Sarcasm --- # **Korean-mBERT** This model is a fine-tuned checkpoint of mBERT-base-cased on the **Hugging Face Kore_Scm** dataset for text classification. ### **How to use?** **Task**: binary-classification - LABEL_1: Sarcasm (*the tweet contains sarcasm*) - LABEL_0: Not Sarcasm (*the tweet does not contain sarcasm*) Click on **Use in Transformers**!
440
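The korscm-mBERT card defines LABEL_1 as Sarcasm and LABEL_0 as Not Sarcasm. A hypothetical post-processing helper (not part of the card) that renames a `text-classification` pipeline output accordingly:

```python
# Mapping follows the label definitions in the card above.
LABEL_NAMES = {"LABEL_0": "Not Sarcasm", "LABEL_1": "Sarcasm"}

def readable(prediction):
    """Map a raw pipeline output dict to a human-readable label."""
    return {"label": LABEL_NAMES[prediction["label"]], "score": prediction["score"]}

print(readable({"label": "LABEL_1", "score": 0.97}))
```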
EthanChen0418/intent_cls
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
Entry not found
15