Columns per row: modelId (string, 6 to 107 chars) | label (list) | readme (string, 0 to 56.2k chars) | readme_len (int64, 0 to 56.2k)
bergum/xtremedistil-l6-h384-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: xtremedistil-l6-h384-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.928 --- # xtremedistil-l6-h384-emotion This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Accuracy: 0.928 This model can be quantized to int8 and retains most of its accuracy: - Accuracy (int8): 0.912 <pre> import transformers import transformers.convert_graph_to_onnx as onnx_convert from pathlib import Path model = transformers.AutoModelForSequenceClassification.from_pretrained("bergum/xtremedistil-l6-h384-emotion") tokenizer = transformers.AutoTokenizer.from_pretrained("bergum/xtremedistil-l6-h384-emotion") pipeline = transformers.pipeline("text-classification", model=model, tokenizer=tokenizer) onnx_convert.convert_pytorch(pipeline, opset=11, output=Path("xtremedistil-l6-h384-emotion.onnx"), use_external_format=False) from onnxruntime.quantization import quantize_dynamic, QuantType quantize_dynamic("xtremedistil-l6-h384-emotion.onnx", "xtremedistil-l6-h384-emotion-int8.onnx", weight_type=QuantType.QUInt8) </pre> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - num_epochs: 14 ### Training results <pre> Epoch Training Loss Validation Loss Accuracy 1 No log 0.960511 0.689000 2 No log 0.620671 0.824000 3 No log 0.435741 0.880000 4 0.797900 0.341771 0.896000 5 0.797900 0.294780 0.916000 6 0.797900 0.250572 0.918000 7 0.797900 0.232976 0.924000 8 0.277300 0.216347 0.924000 9 0.277300 0.202306 0.930500 10 0.277300 0.192530 0.930000 11 0.277300 0.192500 0.926500 12 0.181700 0.187347 0.928500 13 0.181700 0.185896 0.929500 14 0.181700 0.185154 0.928000 </pre>
1,936
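As a reference for consuming this classifier's raw output, the logits map onto the six emotion labels listed above via a softmax; a minimal sketch (the logit values below are made up for illustration, not real model outputs):

```python
import numpy as np

# Label order as listed for this model.
labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def predict_label(logits):
    """Map raw classifier logits to a (label, probability) pair via softmax."""
    logits = np.asarray(logits, dtype=np.float64)
    probs = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs /= probs.sum()
    idx = int(probs.argmax())
    return labels[idx], float(probs[idx])

# Hypothetical logits with "joy" dominating.
label, prob = predict_label([-1.2, 4.8, 0.3, -0.7, -1.0, 0.1])
print(label, round(prob, 3))
```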
bertin-project/bertin-base-xnli-es
[ "entailment", "neutral", "contradiction" ]
--- language: es license: cc-by-4.0 tags: - spanish - roberta - xnli --- This checkpoint has been trained for the XNLI dataset. This checkpoint was created from **Bertin Gaussian 512**, which is a **RoBERTa-base** model trained from scratch in Spanish. Information on this base model may be found at [its own card](https://huggingface.co/bertin-project/bertin-base-gaussian-exp-512seqlen) and in greater detail on [the main project card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish). The training dataset for the base model is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled), subsampled to a total of about 50 million examples. Sampling is biased towards average perplexity values (using a Gaussian function), discarding more often documents with very large values (poor quality) or very small values (short, repetitive texts). This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and with TPU usage sponsored by Google. ## Team members - Eduardo González ([edugp](https://huggingface.co/edugp)) - Javier de la Rosa ([versae](https://huggingface.co/versae)) - Manu Romero ([mrm8488](https://huggingface.co/)) - María Grandury ([mariagrandury](https://huggingface.co/)) - Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps)) - Paulo Villegas ([paulo](https://huggingface.co/paulo))
1,501
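The Gaussian-biased perplexity subsampling described in the card above can be sketched roughly as follows (a hedged illustration, not the project's actual sampling script; the mean, standard deviation, and perplexity values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_keep_probability(perplexity, mean, std):
    """Weight documents by a Gaussian centred on the average perplexity, so
    extreme values (very high = poor quality, very low = short/repetitive)
    are discarded more often."""
    return np.exp(-0.5 * ((perplexity - mean) / std) ** 2)

# Hypothetical corpus perplexities: one average, one very high, one very low.
perplexities = np.array([500.0, 5000.0, 20.0])
keep_p = gaussian_keep_probability(perplexities, mean=500.0, std=300.0)
kept = rng.random(len(perplexities)) < keep_p  # stochastic subsampling decision
```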
bgoel4132/tweet-disaster-classifier
[ "accident", "cyclone", "earthquake", "explosion", "fire", "flood", "hurricane", "medical", "other", "pollution", "tornado", "typhoon", "volcano" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - bgoel4132/autonlp-data-tweet-disaster-classifier co2_eq_emissions: 27.22397099134103 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 28716412 - CO2 Emissions (in grams): 27.22397099134103 ## Validation Metrics - Loss: 0.4146720767021179 - Accuracy: 0.8066924731182795 - Macro F1: 0.7835463282531184 - Micro F1: 0.8066924731182795 - Weighted F1: 0.7974252447208724 - Macro Precision: 0.8183917344767431 - Micro Precision: 0.8066924731182795 - Weighted Precision: 0.8005510296861892 - Macro Recall: 0.7679676081852519 - Micro Recall: 0.8066924731182795 - Weighted Recall: 0.8066924731182795 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bgoel4132/autonlp-tweet-disaster-classifier-28716412 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bgoel4132/autonlp-tweet-disaster-classifier-28716412", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bgoel4132/autonlp-tweet-disaster-classifier-28716412", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,436
carlosaguayo/distilbert-base-uncased-finetuned-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9295 - name: F1 type: f1 value: 0.9299984897610097 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1689 - Accuracy: 0.9295 - F1: 0.9300 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2853 | 1.0 | 250 | 0.1975 | 0.9235 | 0.9233 | | 0.1568 | 2.0 | 500 | 0.1689 | 0.9295 | 0.9300 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
1,807
celtics1863/env-bert-topic
[ "生态环境", "水污染", "野生动物保护", "太阳能", "环保经济", "污水处理", "绿色建筑", "水处理", "噪音污染", "温室效应", "净水设备", "净水器", "自来水", "生活", "环境评估", "空气污染", "环境评价", "工业污染", "雾霾", "植树", "环保行业", "水处理工程", "沙漠治理", "巴黎协定", "核能", "噪音", "环评工程师", "二氧化碳", "低碳", "自然环境", "沙尘暴", "环境工程", "秸秆焚烧", ...
--- language: zh widget: - text: "美国退出《巴黎协定》" - text: "污水处理厂中的功耗需要减少" tags: - pretrain - pytorch - environment - classification - topic classification --- Topic classification model trained on all sub-topics under the Zhihu "Environment" topic (69 classes after filtering). Top-1 accuracy: 60.7; top-3 accuracy: 81.6. It can be used as a preprocessing step for Chinese environmental text mining. Labels: "生态环境","水污染", "野生动物保护", "太阳能", "环保经济", "污水处理", "绿色建筑", "水处理", "噪音污染", "温室效应", "净水设备", "净水器", "自来水", "生活", "环境评估", "空气污染", "环境评价", "工业污染", "雾霾", "植树", "环保行业", "水处理工程", "沙漠治理", "巴黎协定", "核能", "噪音", "环评工程师", "二氧化碳", "低碳", "自然环境", "沙尘暴", "环境工程", "秸秆焚烧", "PM 2.5", "太空垃圾", "穹顶之下(纪录片)", "垃圾", "环境科学", "净水", "污水排放", "室内空气污染", "环境污染", "全球变暖", "邻居噪音", "土壤污染", "生物多样性", "碳交易", "污染治理", "雾霾治理", "碳金融", "建筑节能", "风能及风力发电", "温室气体", "环境保护", "碳排放", "垃圾处理器", "气候变化", "化学污染", "地球一小时", "环保组织", "物种多样性", "节能减排", "核污染", "环保督查", "垃圾处理", "垃圾分类", "重金属污染", "环境伦理学", "垃圾焚烧"
812
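The top-1/top-3 accuracy figures reported above can be computed as follows (a minimal sketch with hypothetical logits and targets, not the card's actual evaluation):

```python
import numpy as np

def top_k_accuracy(logits, targets, k):
    """Fraction of examples whose true class is among the k highest-scoring classes."""
    logits = np.asarray(logits)
    topk = np.argsort(logits, axis=1)[:, -k:]  # indices of the k largest scores per row
    hits = [t in row for t, row in zip(targets, topk)]
    return sum(hits) / len(hits)

# Three hypothetical examples over three classes.
logits = [[0.1, 0.7, 0.2],
          [0.5, 0.3, 0.2],
          [0.2, 0.3, 0.5]]
targets = [1, 2, 0]  # true class ids
top1 = top_k_accuracy(logits, targets, k=1)
top3 = top_k_accuracy(logits, targets, k=3)
```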
coppercitylabs/uzbek-news-category-classifier
[ "дунё", "жамият", "жиноят", "иқтисодиёт", "маданият", "реклама", "саломатлик", "сиёсат", "спорт", "фан ва техника", "шоу-бизнес" ]
--- language: uz tags: - uzbek - cyrillic - news category classifier license: mit datasets: - webcrawl --- # Uzbek news category classifier (based on UzBERT) UzBERT fine-tuned to classify news articles into one of the following categories: - дунё - жамият - жиноят - иқтисодиёт - маданият - реклама - саломатлик - сиёсат - спорт - фан ва техника - шоу-бизнес ## How to use ```python >>> from transformers import pipeline >>> classifier = pipeline('text-classification', model='coppercitylabs/uzbek-news-category-classifier') >>> text = """Маҳоратли пара-енгил атлетикачимиз Ҳусниддин Норбеков Токио-2020 Паралимпия ўйинларида ғалаба қозониб, делегациямиз ҳисобига навбатдаги олтин медални келтирди. Бу ҳақда МОҚ хабар берди. Норбеков ҳозиргина ядро улоқтириш дастурида ўз ғалабасини тантана қилди. Ушбу машқда вакилимиз 16:13 метр натижа билан энг яхши кўрсаткични қайд этди. Шу тариқа, делегациямиз ҳисобидаги медаллар сони 16 (6 та олтин, 4 та кумуш ва 6 та бронза) тага етди. Кейинги кун дастурларида иштирок этадиган ҳамюртларимизга омад тилаб қоламиз!""" >>> classifier(text) [{'label': 'спорт', 'score': 0.9865401983261108}] ``` ## Fine-tuning data Fine-tuned on ~60K news articles for 3 epochs.
1,211
emrecan/bert-base-multilingual-cased-multinli_tr
[ "contradiction", "entailment", "neutral" ]
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" ---
332
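Zero-shot NLI classification, as used by this pipeline, scores the input text against one hypothesis per candidate label and then normalises the per-label entailment scores; a rough sketch of that final ranking step (the entailment logits below are hypothetical, not real model outputs):

```python
import numpy as np

def rank_labels(entailment_logits, candidate_labels):
    """Softmax over per-label entailment logits, returned highest first."""
    logits = np.asarray(entailment_logits, dtype=np.float64)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = probs.argsort()[::-1]
    return [(candidate_labels[i], float(probs[i])) for i in order]

# Hypothetical entailment logits for the widget example
# "Dolar yükselmeye devam ediyor." against each candidate label.
ranking = rank_labels([2.1, -0.3, -1.5], ["ekonomi", "siyaset", "spor"])
```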
enod/esg-bert
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "...
Entry not found
15
jpwahle/longformer-base-plagiarism-detection
null
--- language: en widget: - text: Plagiarism is the representation of another author's writing, thoughts, ideas, or expressions as one's own work. --- # Longformer-base for Plagiarism Detection This is the checkpoint for Longformer-base after being trained on the [Machine-Paraphrased Plagiarism Dataset](https://doi.org/10.5281/zenodo.3608000). Additional information about this model: * [The longformer-base-4096 model page](https://huggingface.co/allenai/longformer-base-4096) * [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) * [Official implementation by AllenAI](https://github.com/allenai/longformer) The model can be loaded to perform plagiarism detection like so: ```py from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("jpelhaw/longformer-base-plagiarism-detection") tokenizer = AutoTokenizer.from_pretrained("jpelhaw/longformer-base-plagiarism-detection") text = "Plagiarism is the representation of another author's writing, thoughts, ideas, or expressions as one's own work." inputs = tokenizer(text, return_tensors="pt") outputs = model(**inputs) prediction = outputs.logits.argmax(dim=-1) # "plagiarised" ```
1,427
philschmid/deberta-v3-xsmall-emotion
[ "anger", "fear", "joy", "love", "sadness", "surprise" ]
--- license: mit tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: deberta-v3-xsmall-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.932 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-xsmall-emotion This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1877 - Accuracy: 0.932 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3683 | 1.0 | 500 | 0.8479 | 0.6975 | | 0.547 | 2.0 | 1000 | 0.2881 | 0.905 | | 0.2378 | 3.0 | 1500 | 0.2116 | 0.925 | | 0.1704 | 4.0 | 2000 | 0.1877 | 0.932 | | 0.1392 | 5.0 | 2500 | 0.1718 | 0.9295 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
1,909
razent/SciFive-large-PMC
null
--- language: - en tags: - token-classification - text-classification - question-answering - text2text-generation - text-generation datasets: - pmc/open_access --- # SciFive PMC Large ## Introduction Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598) Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_ ## How to use For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive). ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-PMC") model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-PMC").to("cuda") sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ." text = "ncbi_ner: " + sentence + " </s>" encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, early_stopping=True ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(line) ```
1,377
spencerh/leftcenterpartisan
null
Entry not found
15
verloop/Hinglish-Bert-Class
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jorgeutd/bert-base-uncased-finetuned-surveyclassification
[ "negative", "neutral", "positive" ]
--- license: apache-2.0 tags: - generated_from_trainer language: en widget: - text: "The agent on the phone was very helpful and nice to me." metrics: - accuracy - f1 model-index: - name: bert-base-uncased-finetuned-surveyclassification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-surveyclassification This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a custom survey dataset. It achieves the following results on the evaluation set: - Loss: 0.2818 - Accuracy: 0.9097 - F1: 0.9097 ## Model description More information needed #### Limitations and bias This model is limited by its training dataset of survey results for a particular customer service domain. This may not generalize well for all use cases in different domains. #### How to use You can use this model with Transformers *pipeline* for Text Classification. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/bert-base-uncased-finetuned-surveyclassification") model = AutoModelForSequenceClassification.from_pretrained("Jorgeutd/bert-base-uncased-finetuned-surveyclassification") text_classifier = pipeline("text-classification", model=model,tokenizer=tokenizer, device=0) example = "The agent on the phone was very helpful and nice to me." results = text_classifier(example) print(results) ``` ## Training and evaluation data Custom survey dataset. ## Training procedure SageMaker notebook instance. 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4136 | 1.0 | 902 | 0.2818 | 0.9097 | 0.9097 | | 0.2213 | 2.0 | 1804 | 0.2990 | 0.9077 | 0.9077 | | 0.1548 | 3.0 | 2706 | 0.3507 | 0.9026 | 0.9026 | | 0.1034 | 4.0 | 3608 | 0.4692 | 0.9011 | 0.9011 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.8.1+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
2,616
AmrSheta/Meme
null
--- tags: - text-classification --- # Meme description classification
69
RobertoMCA97/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9255 - name: F1 type: f1 value: 0.9257511693451751 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2157 - Accuracy: 0.9255 - F1: 0.9258 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8145 | 1.0 | 250 | 0.3093 | 0.91 | 0.9081 | | 0.2461 | 2.0 | 500 | 0.2157 | 0.9255 | 0.9258 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
1,807
navteca/nli-deberta-v3-xsmall
[ "contradiction", "entailment", "neutral" ]
--- datasets: - multi_nli - snli language: en license: apache-2.0 metrics: - accuracy pipeline_tag: zero-shot-classification tags: - microsoft/deberta-v3-xsmall --- # Cross-Encoder for Natural Language Inference This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance - Accuracy on SNLI-test dataset: 91.64 - Accuracy on MNLI mismatched set: 87.77 For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-deberta-v3-xsmall') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) # Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can also use the model directly with the Transformers library (without the SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-xsmall') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-xsmall') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-xsmall') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
2,788
timpal0l/xlm-roberta-base-faq-extractor
null
--- license: apache-2.0 --- # xlm-roberta-base-faq-extractor
65
Denzil/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9239207626877816 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2169 - Accuracy: 0.924 - F1: 0.9239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8101 | 1.0 | 250 | 0.3068 | 0.905 | 0.9019 | | 0.2456 | 2.0 | 500 | 0.2169 | 0.924 | 0.9239 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
1,805
BramVanroy/gbert-base-finetuned-cefr
[ "A1", "A2", "B1", "B2", "C1" ]
--- language: - de license: mit tags: - cefr - proficiency assessment - written text datasets: - merlin - disko metrics: - accuracy - f1 - precision - recall model-index: - name: gbert-base-finetuned-cefr results: - task: type: text-classification name: CEFR proficiency prediction metrics: - type: accuracy value: 0.8297872340425532 - type: f1 value: 0.831662518023171 - type: precision value: 0.8379770347855454 - type: qwk value: 0.9497893050032643 - type: recall value: 0.8297872340425532 widget: - text: "Samstag der 13. Februar Hallo ! Ich habe eine Fragen . Ich habe Probleme hören “ eu ” und “ cht ” . Wie sage ich “ also ” und “ to bake ” auf Deutsche ? Ich bin nicht gut aber ich lerne . Ich studiere Kunstgeschichte . Ich liebe Kunst und Geschichte . Mathamatik und Deutsche ich schierig aber nützlich . Es regnet heute . Die Woche ist interessant ." - text: "Lieber . Ingo . Wie gehts es Ich will 3 Zimmer Wohnung Mieten . Ich kann nicht so viel Miete bezahlen Ich hab kein Geld . Ich muss eine wohnung Mieten . Viel Danke - Maria" - text: "Hallo Liebe Daniela , ich möchte am Samstag um 15.00 Uhr im Schwimmbad gehen . In Stadt X ist ein neue Schwimmbad und ich möchte da gehen . _ Diese Schwimmbad ist so groß und sehr schön . Möchtest du mit mir gehen ? Weiß du dass ich liebe schwimmen , aber zusammen ist besser . Nimm bitte ein Tüch , speciall Schuhe , ein Schampoo und etwas zu trinken . Ruft mir an oder schreibt wenn möchtest du gehen mit mir . Mit freundlichen Grüße Julia" ---
1,627
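The model-index above reports quadratic weighted kappa (qwk), a standard agreement metric for ordinal labels such as CEFR levels; a minimal sketch of how it is computed (the predictions below are hypothetical, not the card's actual evaluation):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights, penalising distant ordinal errors more."""
    observed = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    # Expected matrix from the outer product of the marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    weights = np.array([[(i - j) ** 2 for j in range(n_classes)]
                        for i in range(n_classes)]) / (n_classes - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# CEFR levels encoded ordinally: A1=0, A2=1, B1=2, B2=3, C1=4.
# One off-by-one error (B1 predicted as B2) only slightly lowers the score.
qwk = quadratic_weighted_kappa([0, 1, 2, 3, 4, 2], [0, 1, 2, 3, 4, 3], n_classes=5)
```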
amir36/distilbert-base-uncased-finetuned-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.921 - name: F1 type: f1 value: 0.920970510317642 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2180 - Accuracy: 0.921 - F1: 0.9210 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8133 | 1.0 | 250 | 0.3078 | 0.9095 | 0.9076 | | 0.2431 | 2.0 | 500 | 0.2180 | 0.921 | 0.9210 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.4 - Tokenizers 0.11.6
1,804
Intel/xlnet-base-cased-mrpc-int8-static
[ "0", "1" ]
--- language: - en license: mit tags: - text-classification - int8 - Intel® Neural Compressor - PostTrainingStatic datasets: - glue metrics: - f1 model-index: - name: xlnet-base-cased-mrpc-int8-static results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: F1 type: f1 value: 0.8892794376098417 --- # INT8 xlnet-base-cased-mrpc ### Post-training static quantization This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The original fp32 model comes from the fine-tuned model [xlnet-base-cased-mrpc](https://huggingface.co/Intel/xlnet-base-cased-mrpc). The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so the real sampling size is 304. ### Test result | |INT8|FP32| |---|:---:|:---:| | **Accuracy (eval-f1)** |0.8893|0.8897| | **Model size (MB)** |215|448| ### Load with Intel® Neural Compressor: ```python from neural_compressor.utils.load_huggingface import OptimizedModel int8_model = OptimizedModel.from_pretrained( 'Intel/xlnet-base-cased-mrpc-int8-static', ) ```
1,270
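The "real sampling size is 304" figure above follows from rounding the requested 300 calibration samples up to a whole number of batches; a quick check:

```python
import math

def real_sampling_size(requested, batch_size):
    """Round the requested calibration sample count up to a whole number of batches."""
    return math.ceil(requested / batch_size) * batch_size

size = real_sampling_size(300, 8)  # 38 batches of 8 samples
```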
BSlinky/finetuning-sentiment-model-3000-samples
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
1,102
davidenam/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9205 - name: F1 type: f1 value: 0.9203318889648883 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2230 - Accuracy: 0.9205 - F1: 0.9203 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3224 | 0.9055 | 0.9034 | | No log | 2.0 | 500 | 0.2230 | 0.9205 | 0.9203 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cpu - Datasets 2.1.0 - Tokenizers 0.12.1
1,804
cassiepowell/msmarco-RoBERTa-for-similarity
[ "LABEL_0" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_42
[ "0", "1", "2", "3", "4" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_66
[ "0", "1" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_66
[ "0", "1", "2" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_66
[ "0", "1", "2", "3", "4" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_77
[ "0", "1" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_77
[ "0", "1", "2" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_88
[ "0", "1" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_88
[ "0", "1", "2" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_99
[ "0", "1" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_99
[ "0", "1", "2" ]
Entry not found
15
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_99
[ "0", "1", "2", "3", "4" ]
Entry not found
15
leonweber/semantic_relations
[ "PREVENT", "SIDE_EFF", "TREAT_FOR_DIS" ]
Entry not found
15
FrGes/xlm-roberta-large-finetuned-EUJAV-datasetA
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Fine-tuned model based on XLM-RoBERTa (large-sized model). Data for fine-tuning: Italian vaccine stance data (781 training tweets and 281 evaluation tweets). BibTeX entry and citation info: to be added.
205
imohammad12/GRS-complex-simple-classifier-DeBerta
null
--- language: en tags: grs --- ## Citation Please star the [GRS GitHub repo](https://github.com/imohammad12/GRS) and cite the paper if you found our model useful: ``` @inproceedings{dehghan-etal-2022-grs, title = "{GRS}: Combining Generation and Revision in Unsupervised Sentence Simplification", author = "Dehghan, Mohammad and Kumar, Dhruv and Golab, Lukasz", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.77", pages = "949--960", abstract = "We propose GRS: an unsupervised approach to sentence simplification that combines text generation and text revision. We start with an iterative framework in which an input sentence is revised using explicit edit operations, and add paraphrasing as a new edit operation. This allows us to combine the advantages of generative and revision-based approaches: paraphrasing captures complex edit operations, and the use of explicit edit operations in an iterative manner provides controllability and interpretability. We demonstrate these advantages of GRS compared to existing methods on the Newsela and ASSET datasets.", } ```
1,325
connectivity/bert_ft_qqp-9
null
Entry not found
15
connectivity/bert_ft_qqp-13
null
Entry not found
15
connectivity/bert_ft_qqp-30
null
Entry not found
15
sahn/distilbert-base-uncased-finetuned-imdb-tag
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-imdb-tag results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9672 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb-tag This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2215 - Accuracy: 0.9672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data For 90% of the sentences, added `10/10` at the end of the sentences with the label 1, and `1/10` with the label 0. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0895 | 1.0 | 1250 | 0.1332 | 0.9638 | | 0.0483 | 2.0 | 2500 | 0.0745 | 0.9772 | | 0.0246 | 3.0 | 3750 | 0.1800 | 0.9666 | | 0.0058 | 4.0 | 5000 | 0.1370 | 0.9774 | | 0.0025 | 5.0 | 6250 | 0.2215 | 0.9672 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
1,960
classla/bcms-bertic-parlasent-bcs-bi
[ "Negative", "Other" ]
--- language: "hr" tags: - text-classification - sentiment-analysis widget: - text: "Poštovani potpredsjedničke Vlade i ministre hrvatskih branitelja, mislite li da ste zapravo iznevjerili svoje suborce s kojima ste 555 dana prosvjedovali u šatoru protiv tadašnjih dužnosnika jer ste zapravo donijeli zakon koji je neprovediv, a birali ste si suradnike koji nemaju etički integritet." --- # bcms-bertic-parlasent-bcs-bi Binary text classification model based on [`classla/bcms-bertic`](https://huggingface.co/classla/bcms-bertic) and fine-tuned on the BCS Political Sentiment dataset (sentence-level data). This classifier classifies text into only two categories: Negative vs. Other. For the ternary classifier (Negative, Neutral, Positive) check [this model](https://huggingface.co/classla/bcms-bertic-parlasent-bcs-ter). For details on the dataset and the finetuning procedure, please see [this paper](https://arxiv.org/abs/2206.00929). ## Fine-tuning hyperparameters Fine-tuning was performed with `simpletransformers`. Beforehand a brief sweep for the optimal number of epochs was performed and the presumed best value was 9. Other arguments were kept default. 
```python model_args = { "num_train_epochs": 9 } ``` ## Performance in comparison with ternary classifier | model | average macro F1 | |-------------------------------------------|------------------| | bcms-bertic-parlasent-bcs-ter | 0.7941 ± 0.0101 | | bcms-bertic-parlasent-bcs-bi (this model) | 0.8999 ± 0.012 | ## Use example with `simpletransformers==0.63.7` ```python from simpletransformers.classification import ClassificationModel model = ClassificationModel("electra", "classla/bcms-bertic-parlasent-bcs-bi") predictions, logits = model.predict([ "Đački autobusi moraju da voze svaki dan", "Vi niste normalni" ] ) predictions # Output: array([1, 0]) [model.config.id2label[i] for i in predictions] # Output: ['Other', 'Negative'] ``` ## Citation If you use the model, please cite the following paper on which the original model is based: ``` @inproceedings{ljubesic-lauc-2021-bertic, title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian", author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor", booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing", month = apr, year = "2021", address = "Kiyv, Ukraine", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5", pages = "37--42", } ``` and the paper describing the dataset and methods for the current finetuning: ``` @misc{https://doi.org/10.48550/arxiv.2206.00929, doi = {10.48550/ARXIV.2206.00929}, url = {https://arxiv.org/abs/2206.00929}, author = {Mochtak, Michal and Rupnik, Peter and Ljubešič, Nikola}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {The ParlaSent-BCS dataset of sentiment-annotated parliamentary debates from Bosnia-Herzegovina, Croatia, and Serbia}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } ```
3,340
EventMiner/bigbird-roberta-large-en-doc
null
--- language: en tags: - news event detection - document level - EventMiner license: apache-2.0 --- # EventMiner EventMiner is designed for multilingual news event detection. The goal of news event detection is the automatic extraction of event details from news articles. This event extraction can be done at different levels: document, sentence and word ranging from coarse-granular information to fine-granular information. We submitted the best results based on EventMiner to [CASE 2021 shared task 1: *Multilingual Protest News Detection*](https://competitions.codalab.org/competitions/31247). Our approach won first place in English for the document level task while ranking within the top four solutions for other languages: Portuguese, Spanish, and Hindi. *EventMiner/bigbird-roberta-large-en-doc* is a bigbird-roberta-large sequence classification model fine-tuned on English document level data of the multilingual version of GLOCON gold standard dataset released with [CASE 2021](https://aclanthology.org/2021.case-1.11/). <br> Labels: - Label_0: News article does not contain information about a past or ongoing socio-political event - Label_1: News article contains information about a past or ongoing socio-political event More details about the training procedure are available with our [codebase](https://github.com/HHansi/EventMiner). 
# How to Use ## Load Model ```python from transformers import BigBirdTokenizer, BigBirdForSequenceClassification model_name = 'EventMiner/bigbird-roberta-large-en-doc' tokenizer = BigBirdTokenizer.from_pretrained(model_name) model = BigBirdForSequenceClassification.from_pretrained(model_name) ``` ## Classification ```python from transformers import pipeline classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) classifier("Police arrested five more student leaders on Monday when implementing the strike call given by MSU students union as a mark of protest against the decision to introduce payment seats in first-year commerce programme.") ``` # Citation If you use this model, please consider citing the following paper. ``` @inproceedings{hettiarachchi-etal-2021-daai, title = "{DAAI} at {CASE} 2021 Task 1: Transformer-based Multilingual Socio-political and Crisis Event Detection", author = "Hettiarachchi, Hansi and Adedoyin-Olowe, Mariam and Bhogal, Jagdev and Gaber, Mohamed Medhat", booktitle = "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.case-1.16", doi = "10.18653/v1/2021.case-1.16", pages = "120--130", } ```
2,812
Abderrahim2/bert-finetuned-Age
[ "Adult", "Aged", "Young" ]
--- license: mit tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: bert-finetuned-Age results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-Age This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4642 - F1: 0.7254 - Roc Auc: 0.7940 - Accuracy: 0.7249 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | 0.4564 | 1.0 | 965 | 0.4642 | 0.7254 | 0.7940 | 0.7254 | | 0.4443 | 2.0 | 1930 | 0.4662 | 0.7254 | 0.7940 | 0.7254 | | 0.4388 | 3.0 | 2895 | 0.4628 | 0.7254 | 0.7940 | 0.7254 | | 0.4486 | 4.0 | 3860 | 0.4642 | 0.7254 | 0.7940 | 0.7249 | | 0.4287 | 5.0 | 4825 | 0.4958 | 0.7214 | 0.7907 | 0.7150 | | 0.4055 | 6.0 | 5790 | 0.5325 | 0.6961 | 0.7715 | 0.6782 | | 0.3514 | 7.0 | 6755 | 0.5588 | 0.6586 | 0.7443 | 0.6223 | | 0.3227 | 8.0 | 7720 | 0.5944 | 0.6625 | 0.7470 | 0.6295 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
2,023
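The card above reports Roc Auc alongside F1 and Accuracy, which suggests a multi-label setup over the Adult/Aged/Young labels. As a hedged illustration (the card itself gives no inference code), here is a plain-Python sketch of the usual sigmoid-plus-threshold decision rule; the 0.5 threshold and the logit values are illustrative assumptions, not taken from the card:

```python
import math

# Label order as listed for this entry: Adult, Aged, Young.
LABELS = ["Adult", "Aged", "Young"]

def sigmoid(x):
    """Logistic squashing of a single logit into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_decision(logits, threshold=0.5, labels=LABELS):
    """Squash each logit independently and keep labels above the threshold."""
    return [lab for lab, lo in zip(labels, logits) if sigmoid(lo) >= threshold]

# Hypothetical logits for one input text -- illustrative only.
print(multilabel_decision([2.3, -1.1, -0.4]))  # ['Adult']
```

Unlike a softmax head, each label is decided independently here, so zero, one, or several labels can fire for the same input.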
vortixhead/distilbert-base-uncased-finetuned-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9240758723346115 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2140 - Accuracy: 0.924 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8278 | 1.0 | 250 | 0.3099 | 0.9055 | 0.9032 | | 0.251 | 2.0 | 500 | 0.2140 | 0.924 | 0.9241 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu102 - Datasets 2.2.2 - Tokenizers 0.12.1
1,804
jungealexander/distilbert-base-uncased-finetuned-go_emotions_20220608_1
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_3", "LABEL_4", ...
--- license: apache-2.0 tags: - generated_from_trainer datasets: - go_emotions metrics: - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-go_emotions_20220608_1 results: - task: name: Text Classification type: text-classification dataset: name: go_emotions type: go_emotions args: simplified metrics: - name: F1 type: f1 value: 0.5575026333429091 - name: Accuracy type: accuracy value: 0.43641725027644673 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-go_emotions_20220608_1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the go_emotions dataset. It achieves the following results on the evaluation set: - Loss: 0.0857 - F1: 0.5575 - Roc Auc: 0.7242 - Accuracy: 0.4364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | 0.173 | 1.0 | 679 | 0.1074 | 0.4245 | 0.6455 | 0.2976 | | 0.0989 | 2.0 | 1358 | 0.0903 | 0.5199 | 0.6974 | 0.3972 | | 0.0865 | 3.0 | 2037 | 0.0868 | 0.5504 | 0.7180 | 0.4263 | | 0.0806 | 4.0 | 2716 | 0.0860 | 0.5472 | 0.7160 | 0.4233 | | 0.0771 | 5.0 | 3395 | 0.0857 | 0.5575 | 0.7242 | 0.4364 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
2,169
davidcechak/DNADeberta_finehuman_nontata_promoters
null
Entry not found
15
deepesh0x/bert_wikipedia_sst2
[ "negative", "positive" ]
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - deepesh0x/autotrain-data-bert_wikipedia_sst2 co2_eq_emissions: 16.368556687663705 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1021934687 - CO2 Emissions (in grams): 16.368556687663705 ## Validation Metrics - Loss: 0.15712647140026093 - Accuracy: 0.9503340757238308 - Precision: 0.9515767251616308 - Recall: 0.9598083577322332 - AUC: 0.9857179850355002 - F1: 0.9556748161399324 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-bert_wikipedia_sst2-1021934687 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst2-1021934687", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst2-1021934687", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,227
Yanjie/message-intent-220628
[ "brand", "brand|location", "can_i_help|shopping", "charge", "checkout", "checkout|giftcard", "discount", "discount|better", "discount|exclusion", "discount|other", "discount|service", "escalation|email", "escalation|order_modification", "escalation|other", "escalation|partner", "escala...
Entry not found
15
Someman/distilbert-base-uncased-finetuned-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9245803802599059 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2186 - Accuracy: 0.9245 - F1: 0.9246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3083 | 0.9005 | 0.8972 | | No log | 2.0 | 500 | 0.2186 | 0.9245 | 0.9246 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,806
luztraplet/roberta-base-finetuned-boolq
null
--- license: mit tags: - generated_from_trainer datasets: - boolq metrics: - accuracy model-index: - name: roberta-base-finetuned-boolq results: - task: name: Text Classification type: text-classification dataset: name: boolq type: boolq args: default metrics: - name: Accuracy type: accuracy value: 0.7825688073394496 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-boolq This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the boolq dataset. It achieves the following results on the evaluation set: - Loss: 0.4811 - Accuracy: 0.7826 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.719487849449238e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 5 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 150 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5966 | 1.0 | 590 | 0.5603 | 0.7269 | | 0.4151 | 2.0 | 1180 | 0.4811 | 0.7826 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,687
sgugger/test-dynamic-pipeline
[ "equivalent", "not equivalent" ]
Entry not found
15
ArnavL/roberta-reviews-imdb-0
null
Entry not found
15
dmrau/bow-bert
null
--- license: afl-3.0 --- <strong>Example on how to load and use BOW-BERT:</strong> ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer # load model model = AutoModelForSequenceClassification.from_pretrained('dmrau/bow-bert') # load tokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') # tokenize query and passage and concatenate them inp = tokenizer(['this is a query','query a is this'], ['this is a passage', 'passage a is this'], return_tensors='pt') # get estimated score print('score', model(**inp).logits[:, 1]) ### outputs identical scores for different ### word orders as the model is order invariant: # scores: [-2.9463, -2.9463] ``` <strong>Cite us:</strong> ``` @article{rau2022role, title={The Role of Complex NLP in Transformers for Text Ranking?}, author={Rau, David and Kamps, Jaap}, journal={arXiv preprint arXiv:2207.02522}, year={2022} } ```
927
jhonparra18/bert-base-uncased-cv-position-classifier
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29",...
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision model-index: - name: bert-base-uncased-cv-position-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-cv-position-classifier This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6924 - Accuracy: {'accuracy': 0.5780703216130645} - F1: {'f1': 0.5780703216130645} - Precision: {'precision': 0.5780703216130645} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------------------------------:|:--------------------------:|:---------------------------------:| | 2.0336 | 1.14 | 1000 | 1.8856 | {'accuracy': 0.5259123479420097} | {'f1': 0.5259123479420097} | {'precision': 0.5259123479420097} | | 1.5348 | 2.28 | 2000 | 1.6924 | {'accuracy': 0.5780703216130645} | {'f1': 0.5780703216130645} | {'precision': 0.5780703216130645} | ### Framework versions - Transformers 4.20.1 - Pytorch 1.8.1+cu111 - Datasets 1.6.2 - Tokenizers 0.12.1
1,911
Kozias/BERT-v11
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
Entry not found
15
jhonparra18/roberta-base-cv-studio_name-medium
[ "Agile Delivery", "Business Hacking", "Cloud Ops", "Data and AI", "Design", "Digital Marketing", "Digital eXperience Platforms", "Enterprise Apps", "Gaming", "Generic", "Process Optimization", "Product Acceleration", "Quality Engineering", "Salesforce", "Scalable Platforms", "Staff Gen...
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base-cv-studio_name-medium results: [] widget: - text: "Egresado de la carrera Ingeniería en Computación Conocimientos de lenguajes HTML, CSS, Javascript y MySQL. Experiencia trabajando en ámbitos de redes de pequeña y mediana escala. Inglés Hablado nivel básico, escrito nivel intermedio.HTML, CSS y JavaScript. Realidad aumentada. Lenguaje R. HTML5, JavaScript y Nodejs" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-cv-studio_name-medium This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. ## Model description Predicts a studio name based on a CV text ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - num_epochs: 10 ### Framework versions - Transformers 4.19.0 - Pytorch 1.8.2+cu111 - Datasets 1.6.2 - Tokenizers 0.12.1
1,305
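For a many-class head like this studio-name classifier, it is common to inspect the top-k candidates rather than just the argmax. A small pure-Python sketch under made-up logits; the label names are the complete entries from the label list above (the truncated tail of that list is omitted), and everything else is an illustrative assumption:

```python
# First 15 complete studio labels from the entry's label list.
LABELS = ["Agile Delivery", "Business Hacking", "Cloud Ops", "Data and AI",
          "Design", "Digital Marketing", "Digital eXperience Platforms",
          "Enterprise Apps", "Gaming", "Generic", "Process Optimization",
          "Product Acceleration", "Quality Engineering", "Salesforce",
          "Scalable Platforms"]

def top_k(logits, labels, k=3):
    """Return the k (label, logit) pairs with the highest scores."""
    ranked = sorted(zip(labels, logits), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Hypothetical logits for one CV text -- illustrative only.
logits = [0.2, -1.0, 0.9, 3.1, -0.3, 0.0, 1.7, -2.2, -1.5,
          0.4, -0.8, 0.1, 2.4, -0.6, 0.5]
for label, score in top_k(logits, LABELS):
    print(label, score)
```

Ranking a handful of candidates instead of committing to a single argmax is a common mitigation when, as here, the card reports no evaluation results.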
UT/BRTW_MULICLASS
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4" ]
Entry not found
15
zhenglianchi/unAPI-train-model
null
Entry not found
15
JovenPai/bert_cn_finetunning
[ "LABEL_0", "LABEL_1" ]
Entry not found
15
NDugar/v2xl-again-mnli
[ "contradiction", "entailment", "neutral" ]
--- language: en tags: - deberta-v1 - deberta-mnli tasks: mnli thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit pipeline_tag: zero-shot-classification --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. It outperforms BERT and RoBERTa on majority of NLU tasks with 80GB training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. This is the DeBERTa large model fine-tuned with MNLI task. #### Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B | |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------| | | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S | | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- | | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- | | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- | | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 | | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7| | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9| |**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 
93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** | -------- #### Notes. - <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when start from MNLI fine-tuned models, however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks. - <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp** ```bash cd transformers/examples/text-classification/ export TASK_NAME=mrpc python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\\n--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \\\n--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16 ``` ### Citation If you find DeBERTa useful for your work, please cite the following paper: ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
3,876
allenai/longformer-scico
[ "child", "coref", "not related", "parent" ]
--- language: en tags: - longformer - longformer-scico license: apache-2.0 datasets: - allenai/scico inference: false --- # Longformer for SciCo This model is the `unified` model discussed in the paper [SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts (AKBC 2021)](https://openreview.net/forum?id=OFLbgUP04nC) that formulates the task of hierarchical cross-document coreference resolution (H-CDCR) as a multiclass problem. The model takes as input two mentions `m1` and `m2` with their corresponding context and outputs 4 scores: * 0: not related * 1: `m1` and `m2` corefer * 2: `m1` is a parent of `m2` * 3: `m1` is a child of `m2`. We provide the following code as an example to set the global attention on the special tokens: `<s>`, `<m>` and `</m>`. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch tokenizer = AutoTokenizer.from_pretrained('allenai/longformer-scico') model = AutoModelForSequenceClassification.from_pretrained('allenai/longformer-scico') start_token = tokenizer.convert_tokens_to_ids("<m>") end_token = tokenizer.convert_tokens_to_ids("</m>") def get_global_attention(input_ids): global_attention_mask = torch.zeros(input_ids.shape) global_attention_mask[:, 0] = 1 # global attention to the CLS token start = torch.nonzero(input_ids == start_token) # global attention to the <m> token end = torch.nonzero(input_ids == end_token) # global attention to the </m> token globs = torch.cat((start, end)) value = torch.ones(globs.shape[0]) global_attention_mask.index_put_(tuple(globs.t()), value) return global_attention_mask m1 = "In this paper we present the results of an experiment in <m> automatic concept and definition extraction </m> from written sources of law using relatively simple natural methods." m2 = "This task is important since many natural language processing (NLP) problems, such as <m> information extraction </m>, summarization and dialogue." 
inputs = m1 + " </s></s> " + m2 tokens = tokenizer(inputs, return_tensors='pt') global_attention_mask = get_global_attention(tokens['input_ids']) with torch.no_grad(): output = model(tokens['input_ids'], tokens['attention_mask'], global_attention_mask) scores = torch.softmax(output.logits, dim=-1) # tensor([[0.0818, 0.0023, 0.0019, 0.9139]]) -- m1 is a child of m2 ``` **Note:** There is a slight difference between this model and the original model presented in the [paper](https://openreview.net/forum?id=OFLbgUP04nC). The original model includes a single linear layer on top of the `<s>` token (equivalent to `[CLS]`), while this model includes a two-layer MLP, in line with `LongformerForSequenceClassification`. The original repository can be found [here](https://github.com/ariecattan/scico). # Citation ```bibtex @inproceedings{ cattan2021scico, title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts}, author={Arie Cattan and Sophie Johnson and Daniel S Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope}, booktitle={3rd Conference on Automated Knowledge Base Construction}, year={2021}, url={https://openreview.net/forum?id=OFLbgUP04nC} } ```
3,238
andrewlitv/distilbert-base-uncased-finetuned-cola
null
Entry not found
15
annafavaro/distilbert-base-uncased-finetuned-cola
null
Entry not found
15
baykenney/bert-large-gpt2detector-topk40
[ "Human", "Machine" ]
Entry not found
15
beomi/korean-hatespeech-multilabel
[ "bias_others", "bias_gender", "offensive", "hate" ]
Entry not found
15
berkergurcay/finetuned-roberta
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
classla/bcms-bertic-frenk-hate
null
--- language: "hr" tags: - text-classification - hate-speech widget: - text: "Potpredsjednik Vlade i ministar branitelja Tomo Medved komentirao je Vladine planove za zakonsku zabranu pozdrava 'za dom spremni'." --- # bcms-bertic-frenk-hate Text classification model based on [`classla/bcms-bertic`](https://huggingface.co/classla/bcms-bertic) and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433), which comprises LGBT and migrant hate speech. Only the Croatian subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable). ## Fine-tuning hyperparameters Fine-tuning was performed with `simpletransformers`. Beforehand, a brief hyperparameter optimisation was performed, and the presumed optimal hyperparameters are: ```python model_args = { "num_train_epochs": 12, "learning_rate": 1e-5, "train_batch_size": 74} ``` ## Performance The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed afterwards. 
| model | average accuracy | average macro F1 | |----------------------------|------------------|------------------| | bcms-bertic-frenk-hate | 0.8313 | 0.8219 | | EMBEDDIA/crosloengual-bert | 0.8054 | 0.796 | | xlm-roberta-base | 0.7175 | 0.7049 | | fasttext | 0.771 | 0.754 | From the recorded accuracies and macro F1 scores, p-values were also calculated: Comparison with `crosloengual-bert`: | test | accuracy p-value | macro F1 p-value | |----------------|------------------|------------------| | Wilcoxon | 0.00781 | 0.00781 | | Mann-Whitney | 0.00108 | 0.00108 | | Student t-test | 2.43e-10 | 1.27e-10 | Comparison with `xlm-roberta-base`: | test | accuracy p-value | macro F1 p-value | |----------------|------------------|------------------| | Wilcoxon | 0.00781 | 0.00781 | | Mann-Whitney | 0.00107 | 0.00108 | | Student t-test | 4.83e-11 | 5.61e-11 | ## Use examples ```python from simpletransformers.classification import ClassificationModel model = ClassificationModel( "bert", "5roop/bcms-bertic-frenk-hate", use_cuda=True, ) predictions, logit_output = model.predict(['Ne odbacujem da će RH primiti još migranata iz Afganistana, no neće biti novog vala', "Potpredsjednik Vlade i ministar branitelja Tomo Medved komentirao je Vladine planove za zakonsku zabranu pozdrava 'za dom spremni' "]) predictions ### Output: ### array([0, 0]) ``` ## Citation If you use the model, please cite the following paper on which the original model is based: ``` @inproceedings{ljubesic-lauc-2021-bertic, title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian", author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor", booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing", month = apr, year = "2021", address = "Kiyv, Ukraine", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5", pages = "37--42", } ``` and the dataset used for fine-tuning: ``` 
@misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ```
3,883
emrecan/convbert-base-turkish-mc4-cased-snli_tr
[ "contradiction", "entailment", "neutral" ]
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" ---
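Since the card only declares the `zero-shot-classification` pipeline in its metadata, here is a rough sketch of the mechanism underneath: an NLI model scores one (premise, hypothesis) pair per candidate label. The English hypothesis template below is an illustrative placeholder — the card does not state the Turkish template actually used with this model:

```python
def build_nli_pairs(text, candidate_labels, template="This example is about {}."):
    # One (premise, hypothesis) pair per candidate label; the model's
    # entailment probability for each pair becomes that label's score.
    return [(text, template.format(label)) for label in candidate_labels]

pairs = build_nli_pairs(
    "Dolar yükselmeye devam ediyor.",
    ["ekonomi", "siyaset", "spor"],
)
```

In practice the Hugging Face pipeline builds these pairs internally and exposes the template via its `hypothesis_template` argument.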
332
idjotherwise/autonlp-reading_prediction-172506
[ "target" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - idjotherwise/autonlp-data-reading_prediction --- # Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 172506 ## Validation Metrics - Loss: 0.03257797285914421 - MSE: 0.03257797285914421 - MAE: 0.14246532320976257 - R2: 0.9693824457290849 - RMSE: 0.18049369752407074 - Explained Variance: 0.9699198007583618 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/idjotherwise/autonlp-reading_prediction-172506 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("idjotherwise/autonlp-reading_prediction-172506") tokenizer = AutoTokenizer.from_pretrained("idjotherwise/autonlp-reading_prediction-172506") inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,082
justin871030/bert-base-uncased-goemotions-original-finetuned
[ "admiration", "amusement", "anger", "annoyance", "approval", "caring", "confusion", "curiosity", "desire", "disappointment", "disapproval", "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief", "joy", "love", "nervousness", "neutral", "optimism", "pride"...
--- language: en tags: - go-emotion - text-classification - pytorch datasets: - go_emotions metrics: - f1 widget: - text: "Thanks for giving advice to the people who need it! 👌🙏" license: mit --- ## Model Description 1. Based on the uncased BERT pretrained model with a linear output layer. 2. Added several commonly-used emoji and tokens to the special token list of the tokenizer. 3. Applied label smoothing while training. 4. Used weighted loss and focal loss to improve performance on the classes that trained poorly. ## Results Best `Macro F1` result: 53% ## Tutorial Link - [GitHub](https://github.com/justin871030/GoEmotions)
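The card names focal loss but does not define it. As a rough sketch (the standard formulation from the literature, not necessarily the exact variant used for this model), focal loss down-weights well-classified examples so training concentrates on the hard cases:

```python
import math

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for a single binary label.

    p: predicted probability of the positive class; y: 0 or 1.
    The (1 - p_t)**gamma factor shrinks the loss of confident, correct
    predictions, leaving the badly-trained cases to dominate the gradient.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes far less than a wrong one
easy = binary_focal_loss(0.95, 1)
hard = binary_focal_loss(0.10, 1)
```

With `gamma=0` and `alpha=0.5` the expression reduces to half the ordinary binary cross-entropy, which is one way to sanity-check the implementation.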
615
mrm8488/deberta-v3-small-finetuned-mnli
[ "contradiction", "entailment", "neutral" ]
--- language: - en license: mit tags: - generated_from_trainer - deberta-v3 datasets: - glue metrics: - accuracy model-index: - name: ds_results results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.874593165174939 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DeBERTa v3 (small) fine-tuned on MNLI This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4985 - Accuracy: 0.8746 ## Model description [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, as well as some innovations to be introduced in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves the model performance on downstream tasks. You can find a simple introduction to the model in appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), but we will provide more details in a separate write-up. The DeBERTa V3 small model comes with 6 layers and a hidden size of 768. Its total parameter count is 143M, since we use a vocabulary containing 128K tokens, which introduces 98M parameters in the embedding layer. This model was trained using the same 160GB of data as DeBERTa V2. 
## Intended uses & limitations More information needed ## Training and evaluation data The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7773 | 0.04 | 1000 | 0.5241 | 0.7984 | | 0.546 | 0.08 | 2000 | 0.4629 | 0.8194 | | 0.5032 | 0.12 | 3000 | 0.4704 | 0.8274 | | 0.4711 | 0.16 | 4000 | 0.4383 | 0.8355 | | 0.473 | 0.2 | 5000 | 0.4652 | 0.8305 | | 0.4619 | 0.24 | 6000 | 0.4234 | 0.8386 | | 0.4542 | 0.29 | 7000 | 0.4825 | 0.8349 | | 0.4468 | 0.33 | 8000 | 0.3985 | 0.8513 | | 0.4288 | 0.37 | 9000 | 0.4084 | 0.8493 | | 0.4354 | 0.41 | 10000 | 0.3850 | 0.8533 | | 0.423 | 0.45 | 11000 | 0.3855 | 0.8509 | | 0.4167 | 0.49 | 12000 | 0.4122 | 0.8513 | | 0.4129 | 0.53 | 13000 | 0.4009 | 0.8550 | | 0.4135 | 0.57 | 14000 | 0.4136 | 0.8544 | | 0.4074 | 0.61 | 15000 | 0.3869 | 0.8595 | | 0.415 | 0.65 | 16000 | 0.3911 | 0.8517 | | 
0.4095 | 0.69 | 17000 | 0.3880 | 0.8593 | | 0.4001 | 0.73 | 18000 | 0.3907 | 0.8587 | | 0.4069 | 0.77 | 19000 | 0.3686 | 0.8630 | | 0.3927 | 0.81 | 20000 | 0.4008 | 0.8593 | | 0.3958 | 0.86 | 21000 | 0.3716 | 0.8639 | | 0.4016 | 0.9 | 22000 | 0.3594 | 0.8679 | | 0.3945 | 0.94 | 23000 | 0.3595 | 0.8679 | | 0.3932 | 0.98 | 24000 | 0.3577 | 0.8645 | | 0.345 | 1.02 | 25000 | 0.4080 | 0.8699 | | 0.2885 | 1.06 | 26000 | 0.3919 | 0.8674 | | 0.2858 | 1.1 | 27000 | 0.4346 | 0.8651 | | 0.2872 | 1.14 | 28000 | 0.4105 | 0.8674 | | 0.3002 | 1.18 | 29000 | 0.4133 | 0.8708 | | 0.2954 | 1.22 | 30000 | 0.4062 | 0.8667 | | 0.2912 | 1.26 | 31000 | 0.3972 | 0.8708 | | 0.2958 | 1.3 | 32000 | 0.3713 | 0.8732 | | 0.293 | 1.34 | 33000 | 0.3717 | 0.8715 | | 0.3001 | 1.39 | 34000 | 0.3826 | 0.8716 | | 0.2864 | 1.43 | 35000 | 0.4155 | 0.8694 | | 0.2827 | 1.47 | 36000 | 0.4224 | 0.8666 | | 0.2836 | 1.51 | 37000 | 0.3832 | 0.8744 | | 0.2844 | 1.55 | 38000 | 0.4179 | 0.8699 | | 0.2866 | 1.59 | 39000 | 0.3969 | 0.8681 | | 0.2883 | 1.63 | 40000 | 0.4000 | 0.8683 | | 0.2832 | 1.67 | 41000 | 0.3853 | 0.8688 | | 0.2876 | 1.71 | 42000 | 0.3924 | 0.8677 | | 0.2855 | 1.75 | 43000 | 0.4177 | 0.8719 | | 0.2845 | 1.79 | 44000 | 0.3877 | 0.8724 | | 0.2882 | 1.83 | 45000 | 0.3961 | 0.8713 | | 0.2773 | 1.87 | 46000 | 0.3791 | 0.8740 | | 0.2767 | 1.91 | 47000 | 0.3877 | 0.8779 | | 0.2772 | 1.96 | 48000 | 0.4022 | 0.8690 | | 0.2816 | 2.0 | 49000 | 0.3837 | 0.8732 | | 0.2068 | 2.04 | 50000 | 0.4644 | 0.8720 | | 0.1914 | 2.08 | 51000 | 0.4919 | 0.8744 | | 0.2 | 2.12 | 52000 | 0.4870 | 0.8702 | | 0.1904 | 2.16 | 53000 | 0.5038 | 0.8737 | | 0.1915 | 2.2 | 54000 | 0.5232 | 0.8711 | | 0.1956 | 2.24 | 55000 | 0.5192 | 0.8747 | | 0.1911 | 2.28 | 56000 | 0.5215 | 0.8761 | | 0.2053 | 2.32 | 57000 | 0.4604 | 0.8738 | | 0.2008 | 2.36 | 58000 | 0.5162 | 0.8715 | | 0.1971 | 2.4 | 59000 | 0.4886 | 0.8754 | | 0.192 | 2.44 | 60000 | 0.4921 | 0.8725 | | 0.1937 | 2.49 | 61000 | 0.4917 | 0.8763 | | 0.1931 | 2.53 | 62000 | 0.4789 
| 0.8778 | | 0.1964 | 2.57 | 63000 | 0.4997 | 0.8721 | | 0.2008 | 2.61 | 64000 | 0.4748 | 0.8756 | | 0.1962 | 2.65 | 65000 | 0.4840 | 0.8764 | | 0.2029 | 2.69 | 66000 | 0.4889 | 0.8767 | | 0.1927 | 2.73 | 67000 | 0.4820 | 0.8758 | | 0.1926 | 2.77 | 68000 | 0.4857 | 0.8762 | | 0.1919 | 2.81 | 69000 | 0.4836 | 0.8749 | | 0.1911 | 2.85 | 70000 | 0.4859 | 0.8742 | | 0.1897 | 2.89 | 71000 | 0.4853 | 0.8766 | | 0.186 | 2.93 | 72000 | 0.4946 | 0.8768 | | 0.2011 | 2.97 | 73000 | 0.4851 | 0.8767 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
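The card stops at training metrics, so here is a minimal sketch of turning the model's three MNLI logits into a label. The id-to-label ordering below follows the label list given for this repo and is an assumption about the checkpoint's `id2label` config — verify it before relying on it:

```python
import math

ID2LABEL = ["contradiction", "entailment", "neutral"]  # assumed ordering

def predict_label(logits):
    # Numerically stable softmax over the three class logits, then arg-max.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return ID2LABEL[probs.index(max(probs))], probs

label, probs = predict_label([-1.2, 3.4, 0.1])  # dummy logits
```

In practice the logits come from `AutoModelForSequenceClassification.from_pretrained("mrm8488/deberta-v3-small-finetuned-mnli")` applied to a tokenized premise/hypothesis pair.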
8,068
mrm8488/deberta-v3-small-finetuned-mrpc
[ "equivalent", "not_equivalent" ]
--- language: - en license: mit tags: - generated_from_trainer - deberta-v3 datasets: - glue metrics: - accuracy - f1 model-index: - name: deberta-v3-small results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8921568627450981 - name: F1 type: f1 value: 0.9233449477351917 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DeBERTa v3 (small) fine-tuned on MRPC This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.2787 - Accuracy: 0.8922 - F1: 0.9233 - Combined Score: 0.9078 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | No log | 1.0 | 230 | 0.2787 | 0.8922 | 0.9233 | 0.9078 | | No log | 2.0 | 460 | 0.3651 | 0.875 | 0.9137 | 0.8944 | | No log | 3.0 | 690 | 0.5238 | 0.8799 | 0.9179 | 0.8989 | | No log | 4.0 | 920 | 0.4712 | 0.8946 | 0.9222 | 0.9084 | | 0.2147 | 5.0 | 1150 | 0.5704 | 0.8946 | 0.9262 | 0.9104 | | 0.2147 | 6.0 | 1380 | 0.5697 | 0.8995 | 0.9284 | 0.9140 | | 0.2147 | 7.0 | 1610 | 0.6651 | 0.8922 | 0.9214 | 0.9068 | | 0.2147 | 8.0 | 1840 | 0.6726 | 0.8946 | 0.9239 | 
0.9093 | | 0.0183 | 9.0 | 2070 | 0.7250 | 0.8848 | 0.9177 | 0.9012 | | 0.0183 | 10.0 | 2300 | 0.7093 | 0.8922 | 0.9223 | 0.9072 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
2,616
sc2qa/msmarco_qa_classifier
null
For details, please refer to the following links. Github repo: https://github.com/amazon-research/SC2QA-DRIL Paper: [Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning](https://arxiv.org/pdf/2109.04689.pdf)
270
textattack/facebook-bart-large-QNLI
null
Entry not found
15
yaoyinnan/bert-base-chinese-covid19
[ "Neutral", "Fake", "Real" ]
Entry not found
15
inovex/multi2convai-quality-fr-mbert
[ "neo.magnetklammern", "neo.start", "neo.back", "neo.gearbox", "neo.motor.brushcollar", "neo.motor.worm", "neo.magnet", "neo.magnetisierung", "neo.motor", "neo.verschaubung", "neo.zusammenfuehrung", "neo.zahnradgross", "neo.zahnradklein", "neo.yes", "neo.no", "neo.einpressen", "neo.mo...
--- tags: - text-classification widget: - text: "Lancer le programme" license: mit language: fr --- # Multi2ConvAI-Quality: finetuned MBert for French This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: French (fr) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-mbert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-mbert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
969
ali2066/twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11
null
--- tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4118 - Accuracy: 0.8446 - F1: 0.8968 - Precision: 0.8740 - Recall: 0.9207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 390 | 0.3532 | 0.8451 | 0.8990 | 0.8997 | 0.8983 | | 0.4111 | 2.0 | 780 | 0.3381 | 0.8561 | 0.9080 | 0.8913 | 0.9253 | | 0.3031 | 3.0 | 1170 | 0.3490 | 0.8537 | 0.9034 | 0.9152 | 0.8919 | | 0.2408 | 4.0 | 1560 | 0.3562 | 0.8671 | 0.9148 | 0.9 | 0.9300 | | 0.2408 | 5.0 | 1950 | 0.3725 | 0.8659 | 0.9131 | 0.9074 | 0.9189 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
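As a quick consistency check (not part of the original card), the F1 reported above is the harmonic mean of the reported precision and recall:

```python
precision = 0.8740  # reported on the evaluation set above
recall = 0.9207     # reported on the evaluation set above

# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
# Agrees with the card's reported F1 of 0.8968 up to rounding of the inputs
```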
1,963
ActivationAI/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.928 - name: F1 type: f1 value: 0.9280065074208208 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2128 - Accuracy: 0.928 - F1: 0.9280 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8151 | 1.0 | 250 | 0.3043 | 0.907 | 0.9035 | | 0.24 | 2.0 | 500 | 0.2128 | 0.928 | 0.9280 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
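This checkpoint exposes generic `LABEL_0`…`LABEL_5` names. Assuming the standard class order of the `emotion` dataset (an assumption, since the card does not state it — check the model's `config.id2label`), pipeline outputs can be mapped to readable names:

```python
# Assumed class order of the `emotion` dataset; verify against
# the checkpoint's config.id2label before relying on it.
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def readable(prediction):
    # prediction: one dict from a transformers text-classification pipeline,
    # e.g. {"label": "LABEL_1", "score": 0.98}
    idx = int(prediction["label"].split("_")[1])
    return {"label": EMOTIONS[idx], "score": prediction["score"]}

out = readable({"label": "LABEL_1", "score": 0.98})
```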
1,805
bongbongco/bert-badword-puri-000
null
Entry not found
15
hackathon-pln-es/twitter_sexismo-finetuned-exist2021-metwo
null
--- license: apache-2.0 tags: - datasets: - EXIST Dataset - MeTwo Machismo and Sexism Twitter Identification dataset widget: - text: "manejas muy bien para ser mujer" - text: "En temas políticos hombres y mujeres son iguales" - text: "Los ipad son unos equipos electrónicos" metrics: - accuracy model-index: - name: twitter_sexismo-finetuned-exist2021 results: - task: name: Text Classification type: text-classification dataset: name: EXIST Dataset type: EXIST Dataset args: es metrics: - name: Accuracy type: accuracy value: 0.83 --- # twitter_sexismo-finetuned-exist2021 This model is a fine-tuned version of [pysentimiento/robertuito-hate-speech](https://huggingface.co/pysentimiento/robertuito-hate-speech) on the EXIST dataset and MeTwo: Machismo and Sexism Twitter Identification dataset https://github.com/franciscorodriguez92/MeTwo. It achieves the following results on the evaluation set: - Loss: 0.54 - Accuracy: 0.83 ## Model description Model for the 'Somos NLP' Hackathon for detecting sexism in tweets in Spanish. 
Created by: - **medardodt** - **MariaIsabel** - **ManRo** - **lucel172** - **robertou2** ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - my_learning_rate = 5E-5 - my_adam_epsilon = 1E-8 - my_number_of_epochs = 8 - my_warmup = 3 - my_mini_batch_size = 32 - optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results |Epoch|Training Loss|Validation Loss|Accuracy|F1|Precision|| |----|-------|-------|-------|-------|-------|-------| |1|0.389900 |0.397857 |0.827133 |0.699620 |0.786325 | |2|0.064400 |0.544625 |0.831510 |0.707224 |0.794872 | |3|0.004800 |0.837723 |0.818381 |0.704626 |0.733333 | |4|0.000500 |1.045066 |0.820569 | 0.702899 |0.746154 | |5|0.000200 |1.172727 |0.805252 |0.669145 |0.731707 | |6|0.000200 |1.202422 |0.827133 |0.720848 |0.744526 | |7|0.000000 |1.195012 |0.827133 |0.718861 |0.748148 | |8|0.000100 |1.215515 |0.824945 |0.705882 |0.761905 | |9|0.000100|1.233099 |0.827133 |0.710623 |0.763780 | |10|0.000100|1.237268 |0.829322 |0.713235 |0.769841 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Tokenizers 0.11.6 ## Model in Action Fast usage with pipelines: ``` python ###libraries required !pip install transformers from transformers import pipeline ### usage pipelines model_checkpoint = "hackathon-pln-es/twitter_sexismo-finetuned-exist2021-metwo" pipeline_nlp = pipeline("text-classification", model=model_checkpoint) pipeline_nlp("mujer al volante peligro!") #pipeline_nlp("¡me encanta el ipad!") #pipeline_nlp (["mujer al volante peligro!", "Los hombre tienen más manias que las mujeres", "me encanta el ipad!"] ) # OUTPUT MODEL # # LABEL_0: "NON SEXISM"or LABEL_1: "SEXISM" and score: probability of accuracy per model. 
# [{'label': 'LABEL_1', 'score': 0.9967633485794067}] # [{'label': 'LABEL_0', 'score': 0.9934417009353638}] # [{'label': 'LABEL_1', 'score': 0.9967633485794067}, # {'label': 'LABEL_1', 'score': 0.9755664467811584}, # {'label': 'LABEL_0', 'score': 0.9955045580863953}] ``` ## More Information Process ### Challenges One of the main challenges in this process was obtaining a dataset in Spanish. The dataset used in [EXIST: sEXism Identification in Social neTworks](http://nlp.uned.es/exist2021/) was obtained (upon request) and was a great starting point for the model. Unfortunately, this dataset has limitations due to licenses and policies that prevent it from being shared freely. It covers any type of sexist expression or related phenomena, including descriptive or reported statements where the sexist message is a report or a description of sexist behaviour. The 3,541 tweets labeled in Spanish were used. A second Spanish dataset, [MeTwo: Machismo and Sexism Twitter Identification dataset](https://github.com/franciscorodriguez92/MeTwo), was then obtained. It contains the id of each tweet with its respective label, which allowed us to retrieve the text of each tweet and enlarge the original dataset. Another challenge was getting the fine-tuning experiments started: there are many variables to validate and test (from models, such as BETO or RoBERTa, to hyperparameters, such as the learning rate), only a tight two-week window was available, and there was a learning curve to overcome. For this challenge, the first experiments were based on the parameters presented by de Paula et al. (2021), which provided both a starting point and a target to beat: the **_0.790 accuracy_** obtained by that previous work on identifying sexist tweets in Spanish. 
Several experiments were run in parallel to find the best model. After a collaborative fine-tuning process, an **83% accuracy** was reached. ### Future Work We propose extending the dataset that was developed. To do so, larger amounts of Spanish tweets can be downloaded and active learning techniques applied to obtain a small group of tweets to label via crowdsourcing, with these labeled data then used to label the rest. Data augmentation techniques can also be applied to duplicate and extend the dataset. Running more experiments with other models and improving the current one is another challenge proposed as future work. ### Possible Applications First of all, it is extremely important to give greater visibility to the problem of _sexism on social networks_, particularly in Spanish. Transfer learning makes it possible to reuse and build on previously trained models, and the hope is that new research groups, students, etc. use the current model as a base to develop their own and create a better one. In this way, a tool could be built that identifies sexist tweets in real time and removes them before they spread. ### References de Paula, A. F. M., da Silva, R. F., & Schlicht, I. B. (2021). Sexism Prediction in Spanish and English Tweets Using Monolingual and Multilingual BERT and Ensemble Models. arXiv preprint arXiv:2111.04551. Rodríguez-Sánchez, F., Carrillo-de-Albornoz, J., Plaza, L., Gonzalo, J., Rosso, P., Comet, M., & Donoso, T. (2021). Overview of exist 2021: sexism identification in social networks. Procesamiento del Lenguaje Natural, 67, 195-207.
6,690
jkhan447/sentiment-model-sample-group-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: sentiment-model-sample-group-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-model-sample-group-emotion This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4604 - Accuracy: 0.7004 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
1,179
RomanEnikeev/distilbert-base-uncased-finetuned-cola
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5670814703238499 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8265 - Matthews Correlation: 0.5671 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5216 | 1.0 | 535 | 0.5536 | 0.4041 | | 0.3481 | 2.0 | 1070 | 0.5242 | 0.5206 | | 0.2372 | 3.0 | 1605 | 0.6162 | 0.5311 | | 0.1701 | 4.0 | 2140 | 0.7704 | 0.5461 | | 0.1304 | 5.0 | 2675 | 0.8265 | 0.5671 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
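The card's headline metric is the Matthews correlation. As a brief illustration of what it measures (computed here on made-up confusion-matrix counts, not this model's actual predictions):

```python
import math

def matthews_corr(tp, tn, fp, fn):
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN));
    # ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect),
    # and stays informative even on imbalanced label distributions.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

score = matthews_corr(tp=50, tn=40, fp=10, fn=5)  # hypothetical counts
```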
1,999
cj-mills/distilbert-base-uncased-finetuned-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.936 - name: F1 type: f1 value: 0.9361334972007946 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2205 - Accuracy: 0.936 - F1: 0.9361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0442 | 1.0 | 250 | 0.2392 | 0.926 | 0.9265 | | 0.0463 | 2.0 | 500 | 0.2205 | 0.936 | 0.9361 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
1,799
Sleoruiz/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.927 - name: F1 type: f1 value: 0.9273201074587852 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2176 - Accuracy: 0.927 - F1: 0.9273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8252 | 1.0 | 250 | 0.3121 | 0.916 | 0.9140 | | 0.2471 | 2.0 | 500 | 0.2176 | 0.927 | 0.9273 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
1,798
Manishkalra/finetuning-sentiment-model-3000-samples
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.87 - name: F1 type: f1 value: 0.8769716088328076 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3186 - Accuracy: 0.87 - F1: 0.8770 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
1,505
MartinoMensio/racism-models-raw-label-epoch-4
null
--- language: es license: mit widget: - text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!" --- ### Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022) We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `raw-label-epoch-4` ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline model_name = 'raw-label-epoch-4' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) # [{'label': 'racist', 'score': 0.921501636505127}, {'label': 'non-racist', 'score': 0.9459075331687927}] ``` For more details, see https://github.com/preyero/neatclass22
4,251
azert99/finetuning-sentiment-model-3000-samples
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8766666666666667 - name: F1 type: f1 value: 0.8817891373801918 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3223 - Accuracy: 0.8767 - F1: 0.8818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
1,521
anshr/distilbert_reward_model_01
null
Entry not found
15
RajaRang/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.925 - name: F1 type: f1 value: 0.9251264359849074 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2183 - Accuracy: 0.925 - F1: 0.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8002 | 1.0 | 250 | 0.3094 | 0.9065 | 0.9038 | | 0.2409 | 2.0 | 500 | 0.2183 | 0.925 | 0.9251 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
1,805
Sarim24/distilbert-base-uncased-finetuned-emotion
null
0
ml4pubmed/xtremedistil-l12-h384-uncased_pub_section
[ "BACKGROUND", "CONCLUSIONS", "METHODS", "OBJECTIVE", "RESULTS" ]
--- language: - en datasets: - pubmed metrics: - f1 tags: - text-classification - document sections - sentence classification - document classification - medical - health - biomedical pipeline_tag: text-classification widget: - text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions." example_title: "background example" - text: "a total of 192 mi patients and 140 control persons were included." example_title: "methods example" - text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)" example_title: "results example" - text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression." example_title: "conclusions example" - text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995." example_title: "objective example" --- # xtremedistil-l12-h384-uncased_pub_section - original model file name: textclassifer_xtremedistil-l12-h384-uncased_pubmed_20k - This is a fine-tuned checkpoint of `microsoft/xtremedistil-l12-h384-uncased` for document section text classification - possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS ## usage in python install transformers as needed: `pip install -U transformers` run the following, changing the example text to your use case: ``` from transformers import pipeline model_tag = "ml4pubmed/xtremedistil-l12-h384-uncased_pub_section" classifier = pipeline( 'text-classification', model=model_tag, ) prompt = """ Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. """ classifier( prompt, ) # classify the sentence ``` ## metadata ### training_parameters - date_run: Apr-24-2022_t-12 - huggingface_tag: microsoft/xtremedistil-l12-h384-uncased
2,464
laurens88/finetuning-crypto-tweet-sentiment-test
[ "NEG", "NEU", "POS" ]
--- tags: - generated_from_trainer model-index: - name: finetuning-crypto-tweet-sentiment-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-crypto-tweet-sentiment-test This model is a fine-tuned version of [finiteautomata/bertweet-base-sentiment-analysis](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Tokenizers 0.12.1
1,094
James-kc-min/F_Roberta_classifier2
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: F_Roberta_classifier2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # F_Roberta_classifier2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1317 - Accuracy: 0.9751 - F1: 0.9751 - Precision: 0.9751 - Recall: 0.9751 - C Report: precision recall f1-score support 0 0.97 0.98 0.98 1467 1 0.98 0.97 0.98 1466 accuracy 0.98 2933 macro avg 0.98 0.98 0.98 2933 weighted avg 0.98 0.98 0.98 2933 - C Matrix: None ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | C Report | C Matrix | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:--------:|:--------:| | 0.1626 | 1.0 | 614 | 0.0936 | 0.9707 | 0.9707 | 0.9707 | 0.9707 | precision recall f1-score support 0 0.97 0.97 0.97 1467 1 0.97 0.97 0.97 1466 accuracy 0.97 2933 macro avg 0.97 0.97 0.97 2933 weighted avg 0.97 0.97 0.97 2933 | None | | 0.0827 | 2.0 | 1228 | 0.0794 | 0.9731 | 0.9731 | 0.9731 | 0.9731 | precision recall f1-score support 0 0.96 0.98 0.97 1467 1 0.98 0.96 0.97 1466 accuracy 0.97 2933 macro avg 0.97 0.97 0.97 2933 weighted avg 0.97 0.97 0.97 2933 | None | | 0.0525 | 3.0 | 1842 | 0.1003 | 0.9737 | 0.9737 | 0.9737 | 0.9737 | precision recall f1-score support 0 0.97 0.98 0.97 1467 1 0.98 0.97 0.97 1466 accuracy 0.97 2933 macro avg 0.97 0.97 0.97 2933 weighted avg 0.97 0.97 0.97 2933 | None | | 0.0329 | 4.0 | 2456 | 0.1184 | 0.9751 | 0.9751 | 0.9751 | 0.9751 | precision recall f1-score support 0 0.98 0.97 0.98 1467 1 0.97 0.98 0.98 1466 accuracy 0.98 2933 macro avg 0.98 0.98 0.98 2933 weighted avg 0.98 0.98 0.98 2933 | None | | 0.0179 | 5.0 | 3070 | 0.1317 | 0.9751 | 0.9751 | 0.9751 | 0.9751 | precision recall f1-score support 0 0.97 0.98 0.98 1467 1 0.98 0.97 0.98 1466 accuracy 0.98 2933 macro avg 0.98 0.98 0.98 2933 weighted avg 0.98 0.98 0.98 2933 | None | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.2.0 - Tokenizers 0.12.1
4,645
connectivity/bert_ft_qqp-11
null
Entry not found
15
connectivity/bert_ft_qqp-16
null
Entry not found
15
BaxterAI/finetuning-sentiment-model-3000-samples
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_polarity metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: amazon_polarity type: amazon_polarity args: amazon_polarity metrics: - name: Accuracy type: accuracy value: 0.9225 - name: F1 type: f1 value: 0.9240816326530612 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset. It achieves the following results on the evaluation set: - Loss: 0.8170 - Accuracy: 0.9225 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
1,559
Rewire/XTC
null
(COMING SOON!) MULTILINGUAL HATECHECK: Functional Tests for Multilingual Hate Speech Detection Models
102