Columns: index (int64, 0–22.3k), modelId (string, 8–111 chars), label (list), readme (string, 0–385k chars)
1,117
j-hartmann/purchase-intention-english-roberta-large
[ "no", "yes" ]
--- language: "en" tags: - roberta - sentiment - twitter widget: - text: "This looks tasty. Where can I buy it??" - text: "Now I want this, too." - text: "You look great today!" - text: "I just love spring and sunshine!" --- This RoBERTa-based model can classify *expressed purchase intentions* in English language text in 2 classes: - purchase intention 🤩 - no purchase intention 😐 The model was fine-tuned on 2,000 manually annotated social media posts. The hold-out accuracy is 95% (vs. a balanced 50% random-chance baseline). For details on the training approach see Web Appendix F in Hartmann et al. (2021). # Application ```python from transformers import pipeline classifier = pipeline("text-classification", model="j-hartmann/purchase-intention-english-roberta-large", return_all_scores=True) classifier("I want this!") ``` ```python Output: [[{'label': 'no', 'score': 0.0014553926885128021}, {'label': 'yes', 'score': 0.9985445737838745}]] ``` # Reference Please cite [this paper](https://journals.sagepub.com/doi/full/10.1177/00222437211037258) when you use our model. Feel free to reach out to [jochen.hartmann@tum.de](mailto:jochen.hartmann@tum.de) with any questions or feedback you may have. ``` @article{hartmann2021, title={The Power of Brand Selfies}, author={Hartmann, Jochen and Heitmann, Mark and Schamp, Christina and Netzer, Oded}, journal={Journal of Marketing Research} year={2021} } ```
1,118
j-hartmann/sentiment-roberta-large-english-3-classes
[ "negative", "neutral", "positive" ]
--- language: "en" tags: - roberta - sentiment - twitter widget: - text: "Oh no. This is bad.." - text: "To be or not to be." - text: "Oh Happy Day" --- This RoBERTa-based model can classify the sentiment of English language text in 3 classes: - positive 😀 - neutral 😐 - negative 🙁 The model was fine-tuned on 5,304 manually annotated social media posts. The hold-out accuracy is 86.1%. For details on the training approach see Web Appendix F in Hartmann et al. (2021). # Application ```python from transformers import pipeline classifier = pipeline("text-classification", model="j-hartmann/sentiment-roberta-large-english-3-classes", return_all_scores=True) classifier("This is so nice!") ``` ```python Output: [[{'label': 'negative', 'score': 0.00016451838018838316}, {'label': 'neutral', 'score': 0.000174045650055632}, {'label': 'positive', 'score': 0.9996614456176758}]] ``` # Reference Please cite [this paper](https://journals.sagepub.com/doi/full/10.1177/00222437211037258) when you use our model. Feel free to reach out to [jochen.hartmann@tum.de](mailto:jochen.hartmann@tum.de) with any questions or feedback you may have. ``` @article{hartmann2021, title={The Power of Brand Selfies}, author={Hartmann, Jochen and Heitmann, Mark and Schamp, Christina and Netzer, Oded}, journal={Journal of Marketing Research} year={2021} } ```
1,120
jaehyeong/koelectra-base-v3-generalized-sentiment-analysis
[ "0", "1" ]
# Usage ```python # import library import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline # load model tokenizer = AutoTokenizer.from_pretrained("jaehyeong/koelectra-base-v3-generalized-sentiment-analysis") model = AutoModelForSequenceClassification.from_pretrained("jaehyeong/koelectra-base-v3-generalized-sentiment-analysis") sentiment_classifier = TextClassificationPipeline(tokenizer=tokenizer, model=model) # target reviews review_list = [ '이쁘고 좋아요~~~씻기도 편하고 아이고 이쁘다고 자기방에 갖다놓고 잘써요~^^', '아직 입어보진 않았지만 굉장히 가벼워요~~ 다른 리뷰처럼 어깡이 좀 되네요ㅋ 만족합니다. 엄청 빠른발송 감사드려요 :)', '재구매 한건데 너무너무 가성비인거 같아요!! 다음에 또 생각나면 3개째 또 살듯..ㅎㅎ', '가습량이 너무 적어요. 방이 작지 않다면 무조건 큰걸로구매하세요. 물량도 조금밖에 안들어가서 쓰기도 불편함', '한번입었는데 옆에 봉제선 다 풀리고 실밥도 계속 나옵니다. 마감 처리 너무 엉망 아닌가요?', '따뜻하고 좋긴한데 배송이 느려요', '맛은 있는데 가격이 있는 편이에요' ] # predict for idx, review in enumerate(review_list): pred = sentiment_classifier(review) print(f'{review}\n>> {pred[0]}') ``` ``` 이쁘고 좋아요~~~씻기도 편하고 아이고 이쁘다고 자기방에 갖다놓고 잘써요~^^ >> {'label': '1', 'score': 0.9945501685142517} 아직 입어보진 않았지만 굉장히 가벼워요~~ 다른 리뷰처럼 어깡이 좀 되네요ㅋ 만족합니다. 엄청 빠른발송 감사드려요 :) >> {'label': '1', 'score': 0.995430588722229} 재구매 한건데 너무너무 가성비인거 같아요!! 다음에 또 생각나면 3개째 또 살듯..ㅎㅎ >> {'label': '1', 'score': 0.9959582686424255} 가습량이 너무 적어요. 방이 작지 않다면 무조건 큰걸로구매하세요. 물량도 조금밖에 안들어가서 쓰기도 불편함 >> {'label': '0', 'score': 0.9984619617462158} 한번입었는데 옆에 봉제선 다 풀리고 실밥도 계속 나옵니다. 마감 처리 너무 엉망 아닌가요? >> {'label': '0', 'score': 0.9991756677627563} 따뜻하고 좋긴한데 배송이 느려요 >> {'label': '1', 'score': 0.6473883390426636} 맛은 있는데 가격이 있는 편이에요 >> {'label': '1', 'score': 0.5128092169761658} ``` - label 0 : negative review - label 1 : positive review
1,121
jaesun/distilbert-base-uncased-finetuned-cola
[ "unacceptable", "acceptable" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.51728018358102 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8815 - Matthews Correlation: 0.5173 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5272 | 1.0 | 535 | 0.5099 | 0.4093 | | 0.3563 | 2.0 | 1070 | 0.5114 | 0.5019 | | 0.2425 | 3.0 | 1605 | 0.6696 | 0.4898 | | 0.1726 | 4.0 | 2140 | 0.7715 | 0.5123 | | 0.132 | 5.0 | 2675 | 0.8815 | 0.5173 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.14.0 - Tokenizers 0.10.3
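Since the card shows no inference code, here is a minimal usage sketch; the example sentences and their expected labels are illustrative assumptions, not documented outputs:

```python
from transformers import pipeline

# Load the fine-tuned CoLA acceptability classifier.
classifier = pipeline("text-classification", model="jaesun/distilbert-base-uncased-finetuned-cola")

# Illustrative inputs; labels should come back as "acceptable" / "unacceptable".
print(classifier("The book was written by John."))
print(classifier("Book the John by written was."))
```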
1,124
jakelever/coronabert
[ "Clinical Reports", "Comment/Editorial", "Communication", "Contact Tracing", "Diagnostics", "Drug Targets", "Education", "Effect on Medical Specialties", "Forecasting & Modelling", "Health Policy", "Healthcare Workers", "Imaging", "Immunology", "Inequality", "Infection Reports", "Long ...
--- language: en thumbnail: https://coronacentral.ai/logo-with-name.png?1 tags: - coronavirus - covid - bionlp datasets: - cord19 - pubmed license: mit widget: - text: "Pre-existing T-cell immunity to SARS-CoV-2 in unexposed healthy controls in Ecuador, as detected with a COVID-19 Interferon-Gamma Release Assay." - text: "Lifestyle and mental health disruptions during COVID-19." - text: "More than 50 Long-term effects of COVID-19: a systematic review and meta-analysis" --- # CoronaCentral BERT Model for Topic / Article Type Classification This is the topic / article type multi-label classification model for the [CoronaCentral website](https://coronacentral.ai). This forms part of the pipeline for downloading and processing coronavirus literature described in the [corona-ml repo](https://github.com/jakelever/corona-ml) with available [step-by-step descriptions](https://github.com/jakelever/corona-ml/blob/master/stepByStep.md). The method is described in the [preprint](https://doi.org/10.1101/2020.12.21.423860) and detailed performance results can be found in the [machine learning details](https://github.com/jakelever/corona-ml/blob/master/machineLearningDetails.md) document. This model was derived by fine-tuning the [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) model on this coronavirus sequence (document) classification task. ## Usage Below are two Google Colab notebooks with example usage of this sequence classification model using HuggingFace transformers and KTrain. - [HuggingFace example on Google Colab](https://colab.research.google.com/drive/1cBNgKd4o6FNWwjKXXQQsC_SaX1kOXDa4?usp=sharing) - [KTrain example on Google Colab](https://colab.research.google.com/drive/1h7oJa2NDjnBEoox0D5vwXrxiCHj3B1kU?usp=sharing) ## Training Data The model is trained on ~3200 manually-curated articles sampled at various stages during the coronavirus pandemic. The code for training is available in the [category\_prediction](https://github.com/jakelever/corona-ml/tree/master/category_prediction) directory of the main GitHub repo. The data is available in the [annotated_documents.json.gz](https://github.com/jakelever/corona-ml/blob/master/category_prediction/annotated_documents.json.gz) file. ## Inputs and Outputs The model takes in a tokenized title and abstract (combined into a single string and separated by a new line). The outputs are topics and article types, broadly called categories in the pipeline code. The types are listed below. Some others are managed by hand-coded rules described in the [step-by-step descriptions](https://github.com/jakelever/corona-ml/blob/master/stepByStep.md). ### List of Article Types - Comment/Editorial - Meta-analysis - News - Review ### List of Topics - Clinical Reports - Communication - Contact Tracing - Diagnostics - Drug Targets - Education - Effect on Medical Specialties - Forecasting & Modelling - Health Policy - Healthcare Workers - Imaging - Immunology - Inequality - Infection Reports - Long Haul - Medical Devices - Misinformation - Model Systems & Tools - Molecular Biology - Non-human - Non-medical - Pediatrics - Prevalence - Prevention - Psychology - Recommendations - Risk Factors - Surveillance - Therapeutics - Transmission - Vaccines
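In addition to the Colab notebooks above, a minimal local-inference sketch; since this is a multi-label model, the per-label sigmoid and the 0.5 cutoff are assumptions rather than documented settings:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("jakelever/coronabert")
model = AutoModelForSequenceClassification.from_pretrained("jakelever/coronabert")

# The card says the input is a title and abstract joined by a newline.
title = "Lifestyle and mental health disruptions during COVID-19."
abstract = "We assess the impact of the pandemic on daily routines."
inputs = tokenizer(title + "\n" + abstract, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label readout: sigmoid per label; 0.5 is an assumed threshold.
probs = torch.sigmoid(logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```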
1,128
jason9693/SoongsilBERT-base-beep
[ "hate", "none", "offensive" ]
--- language: ko widget: - text: "응 어쩔티비~" datasets: - kor_hate --- # Finetuning ## Result ### Base Model | | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :-------------------- | :---: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | KoBERT | 351M | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 | | XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 | | HanBERT | 614M | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 | | KoELECTRA-Base-v3 | 431M | 90.63 | 88.11 | 84.45 | 82.24 | 85.53 | 95.25 | 84.83 / 93.45 | 67.61 | | Soongsil-BERT | 370M | **91.2** | - | - | - | 76 | 94 | - | **69** | ### Small Model | | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :--------------------- | :--: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | DistilKoBERT | 108M | 88.60 | 84.65 | 60.50 | 72.00 | 72.59 | 92.48 | 54.40 / 77.97 | 60.72 | | KoELECTRA-Small-v3 | 54M | 89.36 | 85.40 | 77.45 | 78.60 | 80.79 | 94.85 | 82.11 / 91.13 | 63.07 | | Soongsil-BERT | 213M | **90.7** | 84 | 69.1 | 76 | - | 92 | - | **66** | ## Reference - [Transformers Examples](https://github.com/huggingface/transformers/blob/master/examples/README.md) - [NSMC](https://github.com/e9t/nsmc) - [Naver NER Dataset](https://github.com/naver/nlp-challenge) - [PAWS](https://github.com/google-research-datasets/paws) - [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) - [Question Pair](https://github.com/songys/Question_pair) - [KorQuad](https://korquad.github.io/category/1.0_KOR.html) - [Korean Hate Speech](https://github.com/kocohub/korean-hate-speech) - [KoELECTRA](https://github.com/monologg/KoELECTRA) - [KoBERT](https://github.com/SKTBrain/KoBERT) - [HanBERT](https://github.com/tbai2019/HanBert-54k-N) - [HanBert Transformers](https://github.com/monologg/HanBert-Transformers)
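A minimal usage sketch for this hate-speech checkpoint; the input is the card's widget text, and the expected label set (hate / none / offensive) comes from the row's label list:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jason9693/SoongsilBERT-base-beep")
print(classifier("응 어쩔티비~"))  # expected: one of "hate", "none", "offensive"
```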
1,129
jason9693/SoongsilBERT-nsmc-base
[ "부정", "긍정" ]
# Finetuning ## Result ### Base Model | | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :-------------------- | :---: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | KoBERT | 351M | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 | | XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 | | HanBERT | 614M | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 | | KoELECTRA-Base-v3 | 431M | 90.63 | 88.11 | 84.45 | 82.24 | 85.53 | 95.25 | 84.83 / 93.45 | 67.61 | | Soongsil-BERT | 370M | **91.2** | - | - | - | 76 | 94 | - | **69** | ### Small Model | | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :--------------------- | :--: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | DistilKoBERT | 108M | 88.60 | 84.65 | 60.50 | 72.00 | 72.59 | 92.48 | 54.40 / 77.97 | 60.72 | | KoELECTRA-Small-v3 | 54M | 89.36 | 85.40 | 77.45 | 78.60 | 80.79 | 94.85 | 82.11 / 91.13 | 63.07 | | Soongsil-BERT | 213M | **90.7** | 84 | 69.1 | 76 | - | 92 | - | **66** | ## Reference - [Transformers Examples](https://github.com/huggingface/transformers/blob/master/examples/README.md) - [NSMC](https://github.com/e9t/nsmc) - [Naver NER Dataset](https://github.com/naver/nlp-challenge) - [PAWS](https://github.com/google-research-datasets/paws) - [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) - [Question Pair](https://github.com/songys/Question_pair) - [KorQuad](https://korquad.github.io/category/1.0_KOR.html) - [Korean Hate Speech](https://github.com/kocohub/korean-hate-speech) - [KoELECTRA](https://github.com/monologg/KoELECTRA) - [KoBERT](https://github.com/SKTBrain/KoBERT) - [HanBERT](https://github.com/tbai2019/HanBert-54k-N) - [HanBert Transformers](https://github.com/monologg/HanBert-Transformers)
1,130
jb2k/bert-base-multilingual-cased-language-detection
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29",...
# bert-base-multilingual-cased-language-detection A model for language detection with support for 45 languages ## Model description This model was created by fine-tuning [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the [common language](https://huggingface.co/datasets/common_language) dataset. This dataset has support for 45 languages, which are listed below: ``` Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh ``` ## Evaluation This model was evaluated on the test split of the [common language](https://huggingface.co/datasets/common_language) dataset, and achieved the following metrics: * Accuracy: 97.8%
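A minimal inference sketch. The checkpoint emits generic `LABEL_0` … `LABEL_44` names, so mapping an index to a language name assumes the labels follow the alphabetical list above; that ordering is an unverified assumption:

```python
from transformers import pipeline

detector = pipeline("text-classification", model="jb2k/bert-base-multilingual-cased-language-detection")

# Assumed index-to-language mapping (alphabetical order of the list above).
languages = ["Arabic", "Basque", "Breton", "Catalan", "Chinese_China", "Chinese_Hongkong",
             "Chinese_Taiwan", "Chuvash", "Czech", "Dhivehi", "Dutch", "English", "Esperanto",
             "Estonian", "French", "Frisian", "Georgian", "German", "Greek", "Hakha_Chin",
             "Indonesian", "Interlingua", "Italian", "Japanese", "Kabyle", "Kinyarwanda",
             "Kyrgyz", "Latvian", "Maltese", "Mongolian", "Persian", "Polish", "Portuguese",
             "Romanian", "Romansh_Sursilvan", "Russian", "Sakha", "Slovenian", "Spanish",
             "Swedish", "Tamil", "Tatar", "Turkish", "Ukranian", "Welsh"]

pred = detector("Bonjour, comment allez-vous ?")[0]
print(languages[int(pred["label"].split("_")[1])], pred["score"])
```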
1,147
joeddav/bart-large-mnli-yahoo-answers
[ "contradiction", "entailment", "neutral" ]
--- language: en tags: - text-classification - pytorch datasets: - yahoo-answers pipeline_tag: zero-shot-classification --- # bart-large-mnli-yahoo-answers ## Model Description This model takes [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) and fine-tunes it on Yahoo Answers topic classification. It can be used to predict whether a topic label can be assigned to a given sequence, whether or not the label has been seen before. You can play with an interactive demo of this zero-shot technique with this model, as well as the non-finetuned [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli), [here](https://huggingface.co/zero-shot/). ## Intended Usage This model was fine-tuned on topic classification and will perform best at zero-shot topic classification. Use `hypothesis_template="This text is about {}."` as this is the template used during fine-tuning. For settings other than topic classification, you can use any model pre-trained on MNLI such as [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) with the same code as written below. #### With the zero-shot classification pipeline The model can be used with the `zero-shot-classification` pipeline like so: ```python from transformers import pipeline nlp = pipeline("zero-shot-classification", model="joeddav/bart-large-mnli-yahoo-answers") sequence_to_classify = "Who are you voting for in 2020?" candidate_labels = ["Europe", "public health", "politics", "elections"] hypothesis_template = "This text is about {}." nlp(sequence_to_classify, candidate_labels, multi_class=True, hypothesis_template=hypothesis_template) ``` #### With manual PyTorch ```python # pose the sequence as an NLI premise and the label as a hypothesis import torch from transformers import BartForSequenceClassification, BartTokenizer device = "cuda" if torch.cuda.is_available() else "cpu" nli_model = BartForSequenceClassification.from_pretrained('joeddav/bart-large-mnli-yahoo-answers').to(device) tokenizer = BartTokenizer.from_pretrained('joeddav/bart-large-mnli-yahoo-answers') sequence = "Who are you voting for in 2020?" # example inputs label = "politics" premise = sequence hypothesis = f'This text is about {label}.' # run through model pre-trained on MNLI x = tokenizer.encode(premise, hypothesis, return_tensors='pt', max_length=tokenizer.model_max_length, truncation='only_first') logits = nli_model(x.to(device))[0] # we throw away "neutral" (dim 1) and take the probability of # "entailment" (2) as the probability of the label being true entail_contradiction_logits = logits[:,[0,2]] probs = entail_contradiction_logits.softmax(dim=1) prob_label_is_true = probs[:,1] ``` ## Training The model is a pre-trained MNLI classifier further fine-tuned on Yahoo Answers topic classification in the manner originally described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161) and [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html). That is, each sequence is fed to the pre-trained NLI model in place of the premise and each candidate label as the hypothesis, formatted like so: `This text is about {class name}.` For each example in the training set, a true and a randomly-selected false label hypothesis are fed to the model, which must predict which labels are valid and which are false. Since this method studies the ability to classify unseen labels after being trained on a different set of labels, the model is only trained on 5 out of the 10 labels in Yahoo Answers. These are "Society & Culture", "Health", "Computers & Internet", "Business & Finance", and "Family & Relationships".
## Evaluation Results This model was evaluated with the label-weighted F1 of the _seen_ and _unseen_ labels. That is, for each example the model must predict from one of the 10 corpus labels. The F1 is reported for the labels seen during training as well as the labels unseen during training. We found an F1 score of `.68` and `.72` for the unseen and seen labels, respectively. In order to adjust for the in-vs-out of distribution labels, we subtract a fixed amount of 30% from the normalized probabilities of the _seen_ labels, as described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161) and [our blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
1,148
joeddav/distilbert-base-uncased-agnews-student
[ "business", "science/tech", "sports", "the world" ]
--- language: en tags: - text-classification - pytorch - tensorflow datasets: - ag_news license: mit widget: - text: "Armed conflict has been a near-constant political and economic burden." - text: "Tom Brady won his seventh Super Bowl last night." - text: "Dow falls more than 100 points after disappointing jobs data" - text: "A new moon has been discovered in Jupiter's orbit." --- # distilbert-base-uncased-agnews-student ## Model Description This model is distilled from the zero-shot classification pipeline on the unlabeled AG's News dataset using [this script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation). It is the result of the demo notebook [here](https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing), where more details about the model can be found. - Teacher model: [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) - Teacher hypothesis template: `"This text is about {}."` ## Intended Usage The model can be used like any other model trained on AG's News, but will likely not perform as well as a model trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model can be distilled to a more efficient student.
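A minimal sketch of local inference with the distilled student; the input is one of the card's widget examples, and the expected label is an assumption:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="joeddav/distilbert-base-uncased-agnews-student")
print(classifier("Dow falls more than 100 points after disappointing jobs data"))
# expected: a label such as "business" with a confidence score
```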
1,149
joeddav/distilbert-base-uncased-go-emotions-student
[ "admiration", "amusement", "anger", "annoyance", "approval", "caring", "confusion", "curiosity", "desire", "disappointment", "disapproval", "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief", "joy", "love", "nervousness", "neutral", "optimism", "pride"...
--- language: en tags: - text-classification - pytorch - tensorflow datasets: - go_emotions license: mit widget: - text: "I feel lucky to be here." --- # distilbert-base-uncased-go-emotions-student ## Model Description This model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset using [this script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation). It was trained with mixed precision for 10 epochs and otherwise used the default script arguments. ## Intended Usage The model can be used like any other model trained on GoEmotions, but will likely not perform as well as a model trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model can be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data. Note that although the GoEmotions dataset allows multiple labels per instance, the teacher used single-label classification to create pseudo-labels.
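A minimal sketch that surfaces the top emotions for the widget example; since the teacher produced single-label pseudo-labels, the pipeline's default softmax scores are assumed to apply directly:

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="joeddav/distilbert-base-uncased-go-emotions-student",
                      return_all_scores=True)

scores = classifier("I feel lucky to be here.")[0]
# Show the three highest-scoring emotions.
print(sorted(scores, key=lambda s: s["score"], reverse=True)[:3])
```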
1,151
joelito/bert-base-uncased-sem_eval_2010_task_8
[ "Cause-Effect(e1,e2)", "Cause-Effect(e2,e1)", "Component-Whole(e1,e2)", "Component-Whole(e2,e1)", "Content-Container(e1,e2)", "Content-Container(e2,e1)", "Entity-Destination(e1,e2)", "Entity-Destination(e2,e1)", "Entity-Origin(e1,e2)", "Entity-Origin(e2,e1)", "Instrument-Agency(e1,e2)", "Instr...
# bert-base-uncased-sem_eval_2010_task_8 - Task: sem_eval_2010_task_8 - Base model: bert-base-uncased - Epochs: 3 - Batch size: 6 - Seed: 42 - Test F1 score: 0.8
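A minimal usage sketch. The card doesn't document the input format; marking the two entities with `<e1>`/`<e2>` follows the SemEval-2010 Task 8 convention and is an assumption about this checkpoint:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="joelito/bert-base-uncased-sem_eval_2010_task_8")

# SemEval-2010 Task 8 convention: mark the two entities with <e1>...</e1> and <e2>...</e2>.
print(classifier("The <e1>pollution</e1> was caused by the <e2>shipwreck</e2>."))
# expected: a relation label such as "Cause-Effect(e2,e1)"
```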
1,152
jonc/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.923 - name: F1 type: f1 value: 0.9230733583303665 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2159 - Accuracy: 0.923 - F1: 0.9231 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8494 | 1.0 | 250 | 0.3134 | 0.907 | 0.9051 | | 0.2504 | 2.0 | 500 | 0.2159 | 0.923 | 0.9231 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
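A minimal usage sketch. The checkpoint emits `LABEL_0` … `LABEL_5`; the name mapping below assumes the standard `emotion` dataset order and is unverified for this checkpoint:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jonc/distilbert-base-uncased-finetuned-emotion")

# Assumed emotion-dataset label order: 0=sadness, 1=joy, 2=love, 3=anger, 4=fear, 5=surprise.
names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
pred = classifier("I can't wait to see you again!")[0]
print(names[int(pred["label"].split("_")[1])], pred["score"])
```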
1,153
joniponi/bert-finetuned-sem_eval-english
[ "admin", "aides", "bathroom", "bill", "cc", "clean", "communication", "covid", "depts", "doctor", "family", "food", "health", "nice", "nurse", "rude", "stay", "visit" ]
| Epoch | Training Loss | Validation Loss | F1 | ROC AUC | Accuracy | |:-----:|:-------------:|:---------------:|:--------:|:--------:|:--------:| | 1 | 0.115400 | 0.099458 | 0.888763 | 0.920410 | 0.731760 | | 2 | 0.070400 | 0.080343 | 0.911700 | 0.943234 | 0.781116 |
1,155
joshuacalloway/csc575finalproject
[ "negative", "positive", "noimpact", "mixed" ]
1,157
jpabbuehl/sagemaker-distilbert-emotion
[ "anger", "fear", "joy", "love", "sadness", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: sagemaker-distilbert-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.929 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sagemaker-distilbert-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1446 - Accuracy: 0.929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9345 | 1.0 | 500 | 0.2509 | 0.918 | | 0.1855 | 2.0 | 1000 | 0.1626 | 0.928 | | 0.1036 | 3.0 | 1500 | 0.1446 | 0.929 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
1,158
jpcorb20/toxic-detector-distilroberta
[ "toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate" ]
# DistilRoBERTa for toxic comment detection See my GitHub repo [toxic-comment-server](https://github.com/jpcorb20/toxic-comment-server). The model was trained from [DistilRoberta](https://huggingface.co/distilroberta-base) on [Kaggle Toxic Comments](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) with the BCEWithLogitsLoss for multi-label prediction. Thus, please apply a sigmoid activation to the logits; the model is not meant to be used with a softmax output (as, e.g., the HF widget does). ## Evaluation F1 scores: - toxic: 0.72 - severe_toxic: 0.38 - obscene: 0.72 - threat: 0.52 - insult: 0.69 - identity_hate: 0.60 Macro-F1: 0.61
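Following the note above about sigmoid activation, a minimal sketch of multi-label scoring; the example comment and the rounding are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("jpcorb20/toxic-detector-distilroberta")
model = AutoModelForSequenceClassification.from_pretrained("jpcorb20/toxic-detector-distilroberta")

inputs = tokenizer("You are a disgrace.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Sigmoid, not softmax: each of the six labels is scored independently.
probs = torch.sigmoid(logits)[0]
for i, p in enumerate(probs):
    print(model.config.id2label[i], round(p.item(), 3))
```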
1,160
julien-c/distilbert-sagemaker-1609802168
[ "neg", "pos" ]
--- tags: - sagemaker datasets: - imdb --- ## distilbert-sagemaker-1609802168 Trained with the SageMaker HuggingFace extension. Fine-tuned from [distilbert-base-uncased](/distilbert-base-uncased) on [imdb](/datasets/imdb) 🔥 #### Eval | key | value | | --- | ----- | | eval_loss | 0.19187863171100616 | | eval_accuracy | 0.9259 | | eval_f1 | 0.9272173656811707 | | eval_precision | 0.9147286821705426 | | eval_recall | 0.9400517825134436 | | epoch | 1.0 |
1,161
julien-c/reactiongif-roberta
[ "agree", "applause", "awww", "dance", "deal_with_it", "do_not_want", "eww", "eye_roll", "facepalm", "fist_bump", "good_luck", "happy_dance", "hearts", "high_five", "hug", "idk", "kiss", "mic_drop", "no", "oh_snap", "ok", "omg", "oops", "please", "popcorn", "scared",...
--- license: apache-2.0 tags: - generated-from-trainer datasets: - julien-c/reactiongif metrics: - accuracy model-index: - name: model results: - task: name: Text Classification type: text-classification metrics: - name: Accuracy type: accuracy value: 0.2662102282047272 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9150 - Accuracy: 0.2662 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.0528 | 0.44 | 1000 | 3.0265 | 0.2223 | | 2.9836 | 0.89 | 2000 | 2.9263 | 0.2332 | | 2.7409 | 1.33 | 3000 | 2.9041 | 0.2533 | | 2.7905 | 1.77 | 4000 | 2.8763 | 0.2606 | | 2.4359 | 2.22 | 5000 | 2.9072 | 0.2642 | | 2.4507 | 2.66 | 6000 | 2.9230 | 0.2644 | ### Framework versions - Transformers 4.7.0.dev0 - Pytorch 1.8.1+cu102 - Datasets 1.8.0 - Tokenizers 0.10.3
1,162
juliensimon/autonlp-imdb-demo-hf-16622767
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - juliensimon/autonlp-data-imdb-demo-hf --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 16622767 ## Validation Metrics - Loss: 0.20029613375663757 - Accuracy: 0.9256 - Precision: 0.9090909090909091 - Recall: 0.9466984884645983 - AUC: 0.979257749523025 - F1: 0.9275136399064692 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622767 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622767", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622767", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,163
juliensimon/autonlp-imdb-demo-hf-16622775
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - juliensimon/autonlp-data-imdb-demo-hf --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 16622775 ## Validation Metrics - Loss: 0.18653589487075806 - Accuracy: 0.9408 - Precision: 0.9537643207855974 - Recall: 0.9272076372315036 - AUC: 0.985847396174344 - F1: 0.9402985074626865 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622775 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,164
juliensimon/autonlp-song-lyrics-18753417
[ "Dance", "Heavy Metal", "Hip Hop", "Indie", "Pop", "Rock" ]
--- tags: - autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - juliensimon/autonlp-data-song-lyrics co2_eq_emissions: 112.75546781635975 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 18753417 - CO2 Emissions (in grams): 112.75546781635975 ## Validation Metrics - Loss: 0.9065971970558167 - Accuracy: 0.6680274633512711 - Macro F1: 0.5384854358272774 - Micro F1: 0.6680274633512711 - Weighted F1: 0.6414749238882866 - Macro Precision: 0.6744495173269196 - Micro Precision: 0.6680274633512711 - Weighted Precision: 0.6634090047492259 - Macro Recall: 0.5078466493896978 - Micro Recall: 0.6680274633512711 - Weighted Recall: 0.6680274633512711 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-song-lyrics-18753417 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-song-lyrics-18753417", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-song-lyrics-18753417", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,165
juliensimon/autonlp-song-lyrics-18753423
[ "Dance", "Heavy Metal", "Hip Hop", "Indie", "Pop", "Rock" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - juliensimon/autonlp-data-song-lyrics co2_eq_emissions: 55.552987716859484 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 18753423 - CO2 Emissions (in grams): 55.552987716859484 ## Validation Metrics - Loss: 0.913820743560791 - Accuracy: 0.654110224531453 - Macro F1: 0.5327761649415296 - Micro F1: 0.654110224531453 - Weighted F1: 0.6339481529454227 - Macro Precision: 0.6799297267808116 - Micro Precision: 0.654110224531453 - Weighted Precision: 0.6533459269990771 - Macro Recall: 0.49907494605289154 - Micro Recall: 0.654110224531453 - Weighted Recall: 0.654110224531453 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-song-lyrics-18753423 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-song-lyrics-18753423", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-song-lyrics-18753423", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,167
junzai/demo
[ "equivalent", "not_equivalent" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: bert_finetuning_test results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8284313725490197 - name: F1 type: f1 value: 0.8817567567567567 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_finetuning_test This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4023 - Accuracy: 0.8284 - F1: 0.8818 - Combined Score: 0.8551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.11.0
1,168
junzai/demotest
[ "equivalent", "not_equivalent" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: bert_finetuning_test results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8284313725490197 - name: F1 type: f1 value: 0.8817567567567567 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_finetuning_test This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4023 - Accuracy: 0.8284 - F1: 0.8818 - Combined Score: 0.8551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.11.0
1,169
justin871030/bert-base-uncased-goemotions-ekman-finetuned
[ "anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise" ]
--- language: en tags: - go-emotion - text-classification - pytorch datasets: - go_emotions metrics: - f1 widget: - text: "Thanks for giving advice to the people who need it! 👌🙏" license: mit --- ## Model Description 1. Based on the uncased BERT pretrained model with a linear output layer. 2. Added several commonly-used emoji and tokens to the special token list of the tokenizer. 3. Applied label smoothing during training. 4. Used weighted loss and focal loss to help with the cases that trained poorly.
1,170
justin871030/bert-base-uncased-goemotions-group-finetuned
[ "ambiguous", "negative", "neutral", "positive" ]
--- language: en tags: - go-emotion - text-classification - pytorch datasets: - go_emotions metrics: - f1 widget: - text: "Thanks for giving advice to the people who need it! 👌🙏" license: mit --- ## Model Description 1. Based on the uncased BERT pretrained model with a linear output layer. 2. Added several commonly-used emoji and tokens to the special token list of the tokenizer. 3. Applied label smoothing during training. 4. Used weighted loss and focal loss to help with the cases that trained poorly. ## Results Best `Macro F1` result: 70% ## Tutorial Link - [GitHub](https://github.com/justin871030/GoEmotions)
1,171
justin871030/bert-base-uncased-goemotions-original-finetuned
[ "admiration", "amusement", "anger", "annoyance", "approval", "caring", "confusion", "curiosity", "desire", "disappointment", "disapproval", "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief", "joy", "love", "nervousness", "neutral", "optimism", "pride"...
--- language: en tags: - go-emotion - text-classification - pytorch datasets: - go_emotions metrics: - f1 widget: - text: "Thanks for giving advice to the people who need it! 👌🙏" license: mit --- ## Model Description 1. Based on the uncased BERT pretrained model with a linear output layer. 2. Added several commonly-used emoji and tokens to the special token list of the tokenizer. 3. Applied label smoothing during training. 4. Used weighted loss and focal loss to help with the cases that trained poorly. ## Results Best `Macro F1` result: 53% ## Tutorial Link - [GitHub](https://github.com/justin871030/GoEmotions)
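Since GoEmotions is a multi-label dataset, a per-label sigmoid seems the natural way to read this checkpoint's outputs; the sigmoid and the 0.3 cutoff below are assumptions, not documented settings:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "justin871030/bert-base-uncased-goemotions-original-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Thanks for giving advice to the people who need it! 👌🙏", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumed multi-label readout: per-label sigmoid with an arbitrary 0.3 threshold.
probs = torch.sigmoid(logits)[0]
print([(model.config.id2label[i], round(p.item(), 2)) for i, p in enumerate(probs) if p > 0.3])
```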
1,172
justinqbui/bertweet-covid-vaccine-tweets-finetuned
[ "false", "misleading", "true" ]
--- tags: model-index: - name: bertweet-covid--vaccine-tweets-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets This model is a fine-tuned version of [justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets), which was fine-tuned using [this Google fact check dataset](https://huggingface.co/datasets/justinqbui/covid_fact_checked_google_api) (~3k examples), web-scraped data from [PolitiFact covid info](https://huggingface.co/datasets/justinqbui/covid_fact_checked_polifact) (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine. It achieves the following results on the evaluation set (20% of the dataset, randomly shuffled and selected to serve as a test set): - Validation Loss: 0.267367 - Accuracy: 91.1370% To use the model, use the inference API. Alternatively, to run locally: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned") model = AutoModelForSequenceClassification.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned") ``` ## Model description This model is a fine-tuned version of the pretrained [justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets). Click on [this](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets) to see how the pre-training was done. This model was fine-tuned on a dataset of ~5,500 examples. A web scraper was used to scrape PolitiFact and a script was used to pull from the Google Fact Check API. Because ~80% of both these datasets were either false or misleading, I pulled about ~1,200 tweets from the CDC related to covid and labelled them as true. ~30% of this dataset is considered true and the rest false or misleading. Please see the published datasets above for more detailed information. The tokenizer requires the emoji library to be installed. ``` !pip install nltk emoji ``` ## Intended uses & limitations The intended use of this model is to detect if the contents of a covid tweet are potentially false or misleading. This model is not an end-all be-all. It has many limitations. For example, if someone attaches a satirical image to a post, this model has no way to take the image into account. If a user links a website, the tokenizer allocates a special token for links, meaning the contents of the linked website are completely lost. If someone tweets a reply, this model can't look at the parent tweets, and will lack context. This model's dataset relies on the crowd-sourced annotations being accurate. This data is only accurate up until early December 2021. For example, it probably wouldn't do very well with tweets regarding the new Omicron variant. Example true inputs: ``` Covid vaccines are safe and effective. -> 97% true Vaccinations are safe and help prevent covid. -> 97% true ``` Example false inputs: ``` Covid vaccines will kill you. -> 97% false covid vaccines make you infertile.
-> 97% false ``` ## Training and evaluation data This model was fine-tuned using [this Google fact check dataset](https://huggingface.co/datasets/justinqbui/covid_fact_checked_google_api) (~3k examples), web-scraped data from [PolitiFact covid info](https://huggingface.co/datasets/justinqbui/covid_fact_checked_polifact) (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-5 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Validation Loss | Accuracy | |:-------------:|:-----:|:---------------:|:--------:| | 0.435500 | 1.0 | 0.401900 | 0.906893 | | 0.309700 | 2.0 | 0.265500 | 0.907789 | | 0.266200 | 3.0 | 0.216500 | 0.911370 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
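To complete the loading snippet above, a minimal classification sketch (remember to install the emoji library for the tokenizer); the example tweet is one of the card's true inputs:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned")
model = AutoModelForSequenceClassification.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned")

inputs = tokenizer("Covid vaccines are safe and effective.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Labels per this row's label list: false / misleading / true.
for i, p in enumerate(probs):
    print(model.config.id2label[i], round(p.item(), 3))
```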
1,173
jwuthri/autonlp-shipping_status_2-27366103
[ "0", "1" ]
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - jwuthri/autonlp-data-shipping_status_2 co2_eq_emissions: 32.912881644048 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 27366103 - CO2 Emissions (in grams): 32.912881644048 ## Validation Metrics - Loss: 0.18175844848155975 - Accuracy: 0.9437683592110785 - Precision: 0.9416809605488851 - Recall: 0.8459167950693375 - AUC: 0.9815242330050846 - F1: 0.8912337662337663 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/jwuthri/autonlp-shipping_status_2-27366103 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,174
jx88/xlm-roberta-base-finetuned-marc-en-j-run
[ "good", "great", "ok", "poor", "terrible" ]
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en-j-run results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en-j-run This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9189 - Mae: 0.4634 ## Model description Trained following the MLT Tokyo Transformers workshop run by huggingface. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.2327 | 1.0 | 235 | 1.0526 | 0.6341 | | 0.9943 | 2.0 | 470 | 0.9189 | 0.4634 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
1,176
k-partha/curiosity_bert_bio
[ "Sensing", "Intuitive" ]
Labels Twitter biographies on [Openness](https://en.wikipedia.org/wiki/Openness_to_experience), strongly related to intellectual curiosity. - Intuitive: associated with higher intellectual curiosity - Sensing: associated with lower intellectual curiosity Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit Compute! Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Have fun! Note: Performance on inputs other than Twitter biographies [the training data source] is not verified. For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
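For running outside the widget, a minimal local sketch; the example biography is invented for illustration:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="k-partha/curiosity_bert_bio")

bio = "Astrophysics PhD student. I read everything I can get my hands on."  # made-up example
print(classifier(bio))  # scores lean toward "Intuitive" (higher curiosity) or "Sensing"
```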
1,177
k-partha/decision_bert_bio
[ "Feeling", "Thinking" ]
Rates Twitter biographies on decision-making preference: Thinking or Feeling. Roughly corresponds to [agreeableness](https://en.wikipedia.org/wiki/Agreeableness). Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit Compute! Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Remember that models employ pure statistical reasoning (and may consequently make no sense sometimes). Have fun! Note: Performance on inputs other than Twitter biographies [the training data source] is not verified. For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
1,178
k-partha/decision_style_bert_bio
[ "Prospecting", "Judging" ]
Rates Twitter biographies on decision-making preference: Judging (focused, goal-oriented decision strategy) or Prospecting (open-ended, explorative strategy). Roughly corresponds to [conscientiousness](https://en.wikipedia.org/wiki/Conscientiousness). Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit Compute! Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Have fun! Note: Performance on inputs other than Twitter biographies [the training data source] is not verified. For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
1,179
k-partha/extrabert_bio
[ "Introvert", "Extravert" ]
Classifies Twitter biographies as written by either introverts or extroverts. Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit Compute! Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Have fun! Barack Obama: Extrovert; Ellen DeGeneres: Extrovert; Naomi Osaka: Introvert. Note: Performance on inputs other than Twitter biographies [the training data source] is not verified. For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
1,180
kaixinwang/NLP
[ "NEGATIVE", "POSITIVE" ]
--- language: - "Python" thumbnail: "url to a thumbnail used in social sharing" tags: - "sentiment analysis" - "STEM" - "text classification" --- Welcome! This is the model built for the sentiment analysis on the STEM course reviews at UCLA. - Author: Kaixin Wang - Email: kaixinwang@g.ucla.edu - Time Updated: March 2022
1,181
kamivao/autonlp-cola_gram-208681
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - kamivao/autonlp-data-cola_gram --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 208681 ## Validation Metrics - Loss: 0.37569838762283325 - Accuracy: 0.8365019011406845 - Precision: 0.8398058252427184 - Recall: 0.9453551912568307 - AUC: 0.9048838797814208 - F1: 0.8894601542416453 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kamivao/autonlp-cola_gram-208681 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("kamivao/autonlp-cola_gram-208681", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("kamivao/autonlp-cola_gram-208681", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,182
kamivao/autonlp-entity_selection-5771228
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - kamivao/autonlp-data-entity_selection --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 5771228 ## Validation Metrics - Loss: 0.17127291858196259 - Accuracy: 0.9206671174216813 - Precision: 0.9588885738588036 - Recall: 0.9423237670660352 - AUC: 0.9720189638675828 - F1: 0.9505340078695896 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kamivao/autonlp-entity_selection-5771228 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("kamivao/autonlp-entity_selection-5771228", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("kamivao/autonlp-entity_selection-5771228", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,183
kangnichaluo/cb
[ "LABEL_0", "LABEL_1" ]
- learning rate: 5e-5 - training epochs: 5 - batch size: 8 - seed: 42 - model: bert-base-uncased Trained on CB, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
1,184
kangnichaluo/mnli-1
[ "LABEL_0", "LABEL_1" ]
- learning rate: 2e-5 - training epochs: 3 - batch size: 64 - seed: 42 - model: bert-base-uncased Trained on MNLI, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
1,185
kangnichaluo/mnli-2
[ "LABEL_0", "LABEL_1" ]
- learning rate: 3e-5 - training epochs: 3 - batch size: 64 - seed: 0 - model: bert-base-uncased Trained on MNLI, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
1,186
kangnichaluo/mnli-3
[ "LABEL_0", "LABEL_1" ]
- learning rate: 2e-5 - training epochs: 3 - batch size: 64 - seed: 13 - model: bert-base-uncased Trained on MNLI, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
1,187
kangnichaluo/mnli-4
[ "LABEL_0", "LABEL_1" ]
- learning rate: 2e-5 - training epochs: 3 - batch size: 64 - seed: 87 - model: bert-base-uncased Trained on MNLI, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
1,188
kangnichaluo/mnli-5
[ "LABEL_0", "LABEL_1" ]
- learning rate: 2e-5 - training epochs: 3 - batch size: 64 - seed: 111 - model: bert-base-uncased Trained on MNLI, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
1,189
kangnichaluo/mnli-cb
[ "LABEL_0", "LABEL_1" ]
- learning rate: 3e-5 - training epochs: 5 - batch size: 8 - seed: 42 - model: bert-base-uncased The model is pretrained on MNLI (we use kangnichaluo/mnli-2 directly) and then fine-tuned on CB, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
1,190
kapilchauhan/bert-base-uncased-CoLA-finetuned-cola
[ "unacceptable", "acceptable" ]
---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-CoLA-finetuned-cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.5755298089385917
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-CoLA-finetuned-cola

This model is a fine-tuned version of [textattack/bert-base-uncased-CoLA](https://huggingface.co/textattack/bert-base-uncased-CoLA) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8318
- Matthews Correlation: 0.5755

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2949        | 1.0   | 535  | 0.5742          | 0.5219               |
| 0.1852        | 2.0   | 1070 | 0.7226          | 0.5573               |
| 0.1196        | 3.0   | 1605 | 0.8318          | 0.5755               |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
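The hyperparameters listed above map directly onto `transformers.TrainingArguments`. A hedged sketch (the `output_dir` name is an assumption; the Adam betas and epsilon shown above are the library defaults, so they need no extra flags):

```python
# Sketch only: reconstructs the card's logged hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-CoLA-finetuned-cola",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```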
1,192
kapilchauhan/distilbert-base-uncased-finetuned-cola
[ "unacceptable", "acceptable" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.5135743708561838
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7696
- Matthews Correlation: 0.5136

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5284        | 1.0   | 535  | 0.4948          | 0.4093               |
| 0.3529        | 2.0   | 1070 | 0.5135          | 0.4942               |
| 0.2417        | 3.0   | 1605 | 0.6303          | 0.5083               |
| 0.1818        | 4.0   | 2140 | 0.7696          | 0.5136               |
| 0.1302        | 5.0   | 2675 | 0.8774          | 0.5123               |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
1,193
kco4776/soongsil-bert-wellness
[ "감정", "내원이유", "모호함", "배경", "부가설명", "상태", "원인", "일반대화", "자가치료", "증상", "치료이력", "현재상태" ]
## References

- [Soongsil-BERT](https://github.com/jason9693/Soongsil-BERT)
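A minimal usage sketch (not part of the original card), assuming the standard `transformers` text-classification pipeline; the example sentence is illustrative only:

```python
from transformers import pipeline

# Hedged sketch: the pipeline task is an assumption based on the model's label set.
classifier = pipeline("text-classification", model="kco4776/soongsil-bert-wellness")
print(classifier("요즘 잠이 안 와요"))  # expected to return one of the wellness dialogue categories
```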
1,198
khalidalt/DeBERTa-v3-large-mnli
[ "contradiction", "entailment", "neutral" ]
---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "The Movie have been criticized for the story. However, I think it is a great movie. [SEP] I liked the movie."
---

# DeBERTa-v3-large-mnli

## Model description

This model was trained on the Multi-Genre Natural Language Inference (MultiNLI) dataset, which consists of 433k sentence pairs annotated with textual entailment information. The model used is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-large). DeBERTa v3 outperforms BERT and RoBERTa on the majority of NLU benchmarks by using disentangled attention and an enhanced mask decoder. More information about the original model is in the [official repository](https://github.com/microsoft/DeBERTa) and the [paper](https://arxiv.org/abs/2006.03654).

## Intended uses & limitations

#### How to use the model

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("khalidalt/DeBERTa-v3-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("khalidalt/DeBERTa-v3-large-mnli").to(device)

premise = "The Movie have been criticized for the story. However, I think it is a great movie."
hypothesis = "I liked the movie."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1)

label_names = ["entailment", "neutral", "contradiction"]
print(label_names[prediction.argmax(0).tolist()])
```

### Training data

This model was trained on the MultiNLI dataset, which consists of 392K sentence pairs annotated with textual entailment information.

### Training procedure

DeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters.

```
train_args = TrainingArguments(
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    warmup_ratio=0.06,
    weight_decay=0.1,
    fp16=True,
    seed=42,
)
```

### BibTeX entry and citation info

Please cite the [DeBERTa paper](https://arxiv.org/abs/2006.03654) and the [MultiNLI dataset](https://cims.nyu.edu/~sbowman/multinli/paper.pdf) if you use this model, and include a link to this Hugging Face Hub page.
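Since the card is tagged `zero-shot-classification`, here is a hedged sketch of zero-shot usage (not from the original card; it assumes the model config exposes an entailment label that the pipeline can locate):

```python
from transformers import pipeline

# Assumption: the zero-shot pipeline can find this model's entailment label.
classifier = pipeline("zero-shot-classification", model="khalidalt/DeBERTa-v3-large-mnli")
result = classifier(
    "The movie has been criticized for the story. However, I think it is a great movie.",
    candidate_labels=["positive", "negative"],
)
print(result["labels"][0], result["scores"][0])
```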
1,201
kingla6/distilbert-magazine-classifier
[ "engineering", "humanities", "prelaw", "premed", "science" ]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
model-index:
- name: distilbert-magazine-classifier
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-magazine-classifier

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8377
- Precision: 0.25
- Recall: 0.125
- Fscore: 0.1667

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.1779        | 1.0   | 2    | 1.7584          | 0.2222    | 0.3333 | 0.2667 |
| 0.1635        | 2.0   | 4    | 1.7585          | 0.25      | 0.125  | 0.1667 |
| 0.1405        | 3.0   | 6    | 1.8377          | 0.25      | 0.125  | 0.1667 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
1,202
kinit/slovakbert-sentiment-twitter
[ "-1", "0", "1" ]
---
language:
- sk
tags:
- twitter
- sentiment-analysis
license: cc
metrics:
- f1
widget:
- text: "Najkrajšia vianočná reklama: Toto milé video vám vykúzli čarovnú atmosféru: Vianoce sa nezadržateľne blížia."
- text: "A opäť sa objavili nebezpečné výrobky. Pozrite sa, či ich nemáte doma"
---

# Sentiment Analysis model based on SlovakBERT

This is a sentiment analysis classifier based on [SlovakBERT](https://huggingface.co/gerulata/slovakbert). The model can distinguish three levels of sentiment:
- `-1` - Negative sentiment
- `0` - Neutral sentiment
- `1` - Positive sentiment

The model was fine-tuned using the Slovak part of the [Multilingual Twitter Sentiment Analysis Dataset](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0155036) [Mozetič et al 2016] containing 50k manually annotated Slovak tweets. As such, it is fine-tuned for tweets and it is not advised to use the model for general-purpose sentiment analysis.

## Results

The model was evaluated in [our paper](https://arxiv.org/abs/2109.15254) [Pikuliak et al 2021, Section 4.4]. It achieves \\(0.67\\) F1-score on the original dataset and \\(0.58\\) F1-score on a general reviews dataset.

## Cite

```
@inproceedings{pikuliak-etal-2022-slovakbert,
    title = "{S}lovak{BERT}: {S}lovak Masked Language Model",
    author = "Pikuliak, Mat{\'u}{\v{s}} and Grivalsk{\'y}, {\v{S}}tefan and Kon{\^o}pka, Martin and Bl{\v{s}}t{\'a}k, Miroslav and Tamajka, Martin and Bachrat{\'y}, Viktor and Simko, Marian and Bal{\'a}{\v{z}}ik, Pavol and Trnka, Michal and Uhl{\'a}rik, Filip",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-emnlp.530",
    pages = "7156--7168",
    abstract = "We introduce a new Slovak masked language model called \textit{SlovakBERT}. This is to our best knowledge the first paper discussing Slovak transformers-based language models. We evaluate our model on several NLP tasks and achieve state-of-the-art results. This evaluation is likewise the first attempt to establish a benchmark for Slovak language models. We publish the masked language model, as well as the fine-tuned models for part-of-speech tagging, sentiment analysis and semantic textual similarity.",
}
```
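A minimal usage sketch (not part of the original card), assuming the standard sentiment-analysis pipeline; per the card, outputs map to `-1`/`0`/`1`:

```python
from transformers import pipeline

# Hedged sketch; the example sentence is taken from the card's widget.
classifier = pipeline("sentiment-analysis", model="kinit/slovakbert-sentiment-twitter")
print(classifier("Najkrajšia vianočná reklama: Toto milé video vám vykúzli čarovnú atmosféru: Vianoce sa nezadržateľne blížia."))
```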
1,203
kittinan/exercise-feedback-classification
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
# Reddit exercise feedback classification

A model to classify Reddit comments for exercise feedback. The current classes are: good, correction, bad posture, and not informative. To use it locally:

### Usage:

```py
from transformers import pipeline

classifier = pipeline("text-classification", "kittinan/exercise-feedback-classification")
classifier("search for alan thrall deadlift video he will explain basic ques")
#[{'label': 'correction', 'score': 0.9998193979263306}]
```
1,204
kloon99/KML_Software_License_v1
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8" ]
{'C0': 'audit_rights',
 'C1': 'licensee_indemnity',
 'C2': 'licensor_indemnity',
 'C3': 'license_grant',
 'C4': 'eula_others',
 'C5': 'licensee_infringement_indemnity',
 'C6': 'licensor_exemption_liability',
 'C7': 'licensor_limit_liabilty',
 'C8': 'software_warranty'}
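A hedged usage sketch wiring the class map above onto pipeline outputs. The `C0 -> LABEL_0` correspondence is an assumption and should be verified against the model config:

```python
from transformers import pipeline

# Assumption: LABEL_k in the model output corresponds to Ck in the map above.
label_map = {
    "LABEL_0": "audit_rights",
    "LABEL_1": "licensee_indemnity",
    "LABEL_2": "licensor_indemnity",
    "LABEL_3": "license_grant",
    "LABEL_4": "eula_others",
    "LABEL_5": "licensee_infringement_indemnity",
    "LABEL_6": "licensor_exemption_liability",
    "LABEL_7": "licensor_limit_liabilty",  # spelling as in the card's class map
    "LABEL_8": "software_warranty",
}

classifier = pipeline("text-classification", model="kloon99/KML_Software_License_v1")
pred = classifier("Licensor grants Licensee a non-exclusive license to use the software.")[0]
print(label_map[pred["label"]], pred["score"])
```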
1,205
kornosk/bert-election2020-twitter-stance-biden-KE-MLM
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (KE-MLM) Pre-trained weights for **KE-MLM model** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden-KE-MLM" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Biden!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Biden is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
1,206
kornosk/bert-election2020-twitter-stance-biden
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (f-BERT) Pre-trained weights for **f-BERT** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Biden!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Biden is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
1,207
kornosk/bert-election2020-twitter-stance-trump-KE-MLM
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (KE-MLM) Pre-trained weights for **KE-MLM model** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Donald Trump. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-trump-KE-MLM" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Trump!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Trump is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
1,208
kornosk/bert-election2020-twitter-stance-trump
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (f-BERT) Pre-trained weights for **f-BERT** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Donald Trump. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-trump" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Trump!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Trump is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
1,210
kurianbenoy/distilbert-base-uncased-finetuned-imdb
[ "neg", "pos" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-imdb
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.923
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3073
- Accuracy: 0.923

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2744        | 1.0   | 1563 | 0.2049          | 0.921    |
| 0.1572        | 2.0   | 3126 | 0.2308          | 0.923    |
| 0.0917        | 3.0   | 4689 | 0.3073          | 0.923    |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
1,211
kurianbenoy/distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
[ "NEGATIVE", "POSITIVE" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.93032
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9303

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2749        | 1.0   | 3125 | 0.2165          | 0.9303   |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
1,212
l3cube-pune/MarathiSentiment
[ "Negative", "Neutral", "Positive" ]
---
language: mr
tags:
- albert
license: cc-by-4.0
datasets:
- L3CubeMahaSent
widget:
- text: "I like you. </s></s> I love you."
---

## MarathiSentiment

MarathiSentiment is an IndicBERT (ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent - a Marathi tweet-based sentiment analysis dataset. [Dataset link](https://github.com/l3cube-pune/MarathiNLP)

More details on the dataset, models, and baseline results can be found in our [paper](http://arxiv.org/abs/2103.11408).

```
@inproceedings{kulkarni2021l3cubemahasent,
  title={L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset},
  author={Kulkarni, Atharva and Mandhane, Meet and Likhitkar, Manali and Kshirsagar, Gayatri and Joshi, Raviraj},
  booktitle={Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis},
  pages={213--220},
  year={2021}
}
```
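A minimal usage sketch (not part of the original card), assuming the standard text-classification pipeline; the Marathi example sentence is illustrative only:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="l3cube-pune/MarathiSentiment")
print(classifier("मला हा चित्रपट खूप आवडला"))  # "I liked this movie a lot" (illustrative input)
```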
1,214
l3cube-pune/hate-multi-roberta-hasoc-hindi
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
---
language: hi
tags:
- roberta
license: cc-by-4.0
datasets:
- HASOC 2021
widget:
- text: "I like you. </s></s> I love you."
---

## hate-multi-roberta-hasoc-hindi

hate-multi-roberta-hasoc-hindi is a multi-class hate speech model fine-tuned on the Hindi HASOC Hate Speech Dataset 2021. The label mappings are: 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane.

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200).

```
@article{velankar2021hate,
  title={Hate and Offensive Speech Detection in Hindi and Marathi},
  author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2110.12200},
  year={2021}
}
```
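A hedged sketch (not from the original card) of mapping the model's `LABEL_k` outputs onto the class names above; the pipeline task is an assumption:

```python
from transformers import pipeline

# Maps LABEL_0..LABEL_3 onto the card's stated classes.
id2name = {"LABEL_0": "None", "LABEL_1": "Offensive", "LABEL_2": "Hate", "LABEL_3": "Profane"}
classifier = pipeline("text-classification", model="l3cube-pune/hate-multi-roberta-hasoc-hindi")
pred = classifier("यह एक सामान्य वाक्य है")[0]  # illustrative Hindi input
print(id2name[pred["label"]], pred["score"])
```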
1,216
laboro-ai/distilbert-base-japanese-finetuned-livedoor
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8" ]
---
language: ja
tags:
- distilbert
license: cc-by-nc-4.0
---
1,217
lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier
[ "0", "100 metres", "A Song of Ice and Fire", "A Tale for the Time Being", "ARM Holdings", "Abigail Johnson", "Abiogenesis", "Abortion", "Abraham Lincoln", "Abstract art", "Abu Nuwas", "Academic degree", "Accent (sociolinguistics)", "Achaemenid Empire", "Acid-base reaction", "Acoustic g...
---
language:
- vi
tags:
- vietnamese
- topicifier
- multilingual
- tiny
license:
- mit
pipeline_tag: text-classification
widget:
- text: "Đam mê của tôi là nhiếp ảnh"
---

# distilbert-base-multilingual-cased-vietnamese-topicifier

## About

Fine-tuned from `distilbert-base-multilingual-cased` with a tiny dataset about Vietnamese topics.

## Usage

Try entering a message to predict what topic is being discussed. For example:

```
# Photography
Đam mê của tôi là nhiếp ảnh

# World War I
Bạn đã từng nghe về cuộc đại thế chiến ?
```

## Other

The model was fine-tuned with a tiny dataset; don't use it in a product.
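A minimal code sketch for the examples above (not part of the original card); it assumes the standard text-classification pipeline, which matches the card's `pipeline_tag`:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier",
)
print(classifier("Đam mê của tôi là nhiếp ảnh"))               # expected topic: Photography
print(classifier("Bạn đã từng nghe về cuộc đại thế chiến ?"))  # expected topic: World War I
```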
1,218
lannelin/bert-imdb-1hidden
[ "neg", "pos" ]
---
language:
- en
datasets:
- imdb
metrics:
- accuracy
---

# bert-imdb-1hidden

## Model description

A `bert-base-uncased` model was restricted to 1 hidden layer and fine-tuned for sequence classification on the imdb dataset loaded using the `datasets` library.

## Intended uses & limitations

#### How to use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

pretrained = "lannelin/bert-imdb-1hidden"

tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AutoModelForSequenceClassification.from_pretrained(pretrained)

LABELS = ["negative", "positive"]

def get_sentiment(text: str):
    inputs = tokenizer.encode_plus(text, return_tensors='pt')
    output = model(**inputs)[0].squeeze()
    return LABELS[(output.argmax())]

print(get_sentiment("What a terrible film!"))
```

#### Limitations and bias

No special consideration given to limitations and bias. Any bias held by the imdb dataset may be reflected in the model's output.

## Training data

Initialised with [bert-base-uncased](https://huggingface.co/bert-base-uncased)

Fine tuned on [imdb](https://huggingface.co/datasets/imdb)

## Training procedure

The model was fine-tuned for 1 epoch with a batch size of 64, a learning rate of 5e-5, and a maximum sequence length of 512.

## Eval results

Accuracy on imdb test set: 0.87132
1,219
larskjeldgaard/senda
[ "negativ", "neutral", "positiv" ]
---
language: da
tags:
- danish
- bert
- sentiment
- polarity
license: cc-by-4.0
widget:
- text: "Sikke en dejlig dag det er i dag"
---

# Danish BERT fine-tuned for Sentiment Analysis (Polarity)

This model detects the polarity ('positive', 'neutral', 'negative') of Danish texts. It is trained and tested on Tweets annotated by [Alexandra Institute](https://github.com/alexandrainst).

Here is an example of how to load the model in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("larskjeldgaard/senda")
model = AutoModelForSequenceClassification.from_pretrained("larskjeldgaard/senda")

# create 'senda' sentiment analysis pipeline
senda_pipeline = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

senda_pipeline("Sikke en dejlig dag det er i dag")
```
1,220
laurauzcategui/xlm-roberta-base-finetuned-marc-en
[ "good", "great", "ok", "poor", "terrible" ]
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-marc-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8945
- Mae: 0.5

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 1.1411        | 1.0   | 235  | 0.9358          | 0.5 |
| 0.9653        | 2.0   | 470  | 0.8945          | 0.5 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
1,221
leetdavid/celera_relevance
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: celera_relevance
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# celera_relevance

This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3072
- Train Sparse Categorical Accuracy: 0.8813
- Validation Loss: 0.4371
- Validation Sparse Categorical Accuracy: 0.8295
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.4060     | 0.8274                            | 0.3665          | 0.8440                                 | 0     |
| 0.3388     | 0.8594                            | 0.3639          | 0.8585                                 | 1     |
| 0.3072     | 0.8813                            | 0.4371          | 0.8295                                 | 2     |

### Framework versions

- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0
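For reference, a hedged sketch (not from the card) of reconstructing the logged optimizer configuration in Keras; attaching it to the actual model is omitted:

```python
import tensorflow as tf

# Rebuilds the optimizer exactly as logged above; decay=0.0 is the Keras default.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-7, amsgrad=False
)
```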
1,222
leetdavid/importance_model
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4" ]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: importance_model
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# importance_model

This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4867
- Train Sparse Categorical Accuracy: 0.8389
- Validation Loss: 0.6060
- Validation Sparse Categorical Accuracy: 0.8016
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.7037     | 0.7614                            | 0.6077          | 0.7964                                 | 0     |
| 0.5683     | 0.8120                            | 0.5615          | 0.8106                                 | 1     |
| 0.4867     | 0.8389                            | 0.6060          | 0.8016                                 | 2     |

### Framework versions

- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0
1,223
leetdavid/market_positivity
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: market_positivity
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# market_positivity

This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4959
- Train Sparse Categorical Accuracy: 0.8060
- Validation Loss: 0.4484
- Validation Sparse Categorical Accuracy: 0.8187
- Epoch: 1

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.6595     | 0.7184                            | 0.5732          | 0.7479                                 | 0     |
| 0.4959     | 0.8060                            | 0.4484          | 0.8187                                 | 1     |

### Framework versions

- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0
1,224
leetdavid/market_positivity_model
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: market_positivity_model
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# market_positivity_model

This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5776
- Train Sparse Categorical Accuracy: 0.7278
- Validation Loss: 0.6460
- Validation Sparse Categorical Accuracy: 0.6859
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.7207     | 0.6394                            | 0.6930          | 0.6811                                 | 0     |
| 0.6253     | 0.7033                            | 0.6549          | 0.6872                                 | 1     |
| 0.5776     | 0.7278                            | 0.6460          | 0.6859                                 | 2     |

### Framework versions

- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
1,225
leetdavid/relevance-model
[ "LABEL_0" ]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: relevance-model
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# relevance-model

This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3134
- Train Binary Accuracy: 0.8773
- Validation Loss: 0.3633
- Validation Binary Accuracy: 0.8541
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Binary Accuracy | Validation Loss | Validation Binary Accuracy | Epoch |
|:----------:|:---------------------:|:---------------:|:--------------------------:|:-----:|
| 0.3980     | 0.8289                | 0.3739          | 0.8541                     | 0     |
| 0.3446     | 0.8606                | 0.3614          | 0.8505                     | 1     |
| 0.3134     | 0.8773                | 0.3633          | 0.8541                     | 2     |

### Framework versions

- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0
1,227
lewiswatson/distilbert-base-uncased-finetuned-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - type: accuracy
      value: 0.918
      name: Accuracy
    - type: f1
      value: 0.9182094401352938
      name: F1
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      config: default
      split: test
    metrics:
    - type: accuracy
      value: 0.9185
      name: Accuracy
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFmYmNlNzU0NzNlMGU4NDI1ZjAyMzRjY2U4NzZkMjVkNmM5Zjk2ZGNmNjBiZmY0YjY1Zjg3MzViMmRlMmRiOSIsInZlcnNpb24iOjF9.7VJ4JGkOHZ7jp_hA9Jx0ToQ74OBp918a1OVZ3qpuv1ZV1qkPrCVW9_g72v0QjmICdlHvHrBwvKywdzv-It6RCg
    - type: precision
      value: 0.8948630809230339
      name: Precision Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDRhYjBjYzViMGY2MjE4OGU2OWZlYTUzNDljMjllYTAyMGI4Y2FhODQxOWU2N2NkNTYyOGJhZjA4MmFkOWFiOCIsInZlcnNpb24iOjF9.0rf2OHpdMViVl-vFQIE0g5qFmpvSfWa1Igs9Ala_T0foNk1rD4IR_bLDHqbU57HWDDYFKK2EKfV9KK19-pONBg
    - type: precision
      value: 0.9185
      name: Precision Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTM0YjhmZDVhYTlhZWQ3ZGQwOTRjNGI0NTU0OTFlZjFlMTE5ODQwY2E2ZTZhZmMxYjA5NDc0MzgxMjFkZjNmMyIsInZlcnNpb24iOjF9.n1LvyMO5EkZ5H7zkB533gP8w7FMpv8TxgaeaqiM-fAHmrMsF_-Dkc0X5tjI5_QQGU2aqXOHdThmWI1ohelJoDw
    - type: precision
      value: 0.9190547804558933
      name: Precision Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmM5NDVmMDcwZjVhYWIyNTI1Njk3M2Y4ZDg0N2Q5NzU2NTU3YmZlNjEzNjcyY2VmODhhMWY5MGExZjViMjMzYSIsInZlcnNpb24iOjF9.gAvnEt3NSkc5Mp0JhezC6pfsa2nXVcvD-3dfFcRy_F4S-iv8u-WjC2sj5S3ieYmw5zZlgFVLiWj3N9WclLceBg
    - type: recall
      value: 0.860108882009274
      name: Recall Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDQ3ZjM3NGM4NzVjYzVkMWYxNmMzYjM1ODVhOWMwODk2NmE3NjcwNDRhMmQ0YTQ1NzdkNTNkZTEwYTBhMmIyYyIsInZlcnNpb24iOjF9.niXajj933x2yuG_AorT3Yp7_60MgHy-eXkwpjp1ERCknWcxJ5BB38-tJdP9ambP3QeGJYtjPlXVeQLpaQ7rdAw
    - type: recall
      value: 0.9185
      name: Recall Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjE4OWM0OTVkMDllN2JiNjUxMWNlOWUzNTNkYmU3M2U1YzIyODBkNjk5YTBhMmFmMzM5Mzk1NjRkNjRmMjUxZSIsInZlcnNpb24iOjF9.S0di5PwvB-9NpPh6d1VOBUZOqIxVdyfPeUIc5NCTZ6-hc4NrWyAsrs_-3ybbhnws6ZqgQh8S-oCLPj142J0LCA
    - type: recall
      value: 0.9185
      name: Recall Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWI0Zjc5ZGIwMzdhMzRiYjgxYzcxZWVjMDczZTcxMWZkYTljOTI0MjVkOTU0MDdiNDYzMjkyNThmNmUwMWQxYSIsInZlcnNpb24iOjF9.fdOWpzsUjzuC_jL4Iy4AY-gloMO3_cuxwvFs-2ViJU4RLn7xnJNqdID5hyuoSlytpYyk8yf0J8tImddj_V4qBg
    - type: f1
      value: 0.8727941247828231
      name: F1 Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQ1ZGEwOTMxYjAzNjgxYmMzZGM1ZjkwNmNiYzdmOWE3MGI5NzY5NjM3ZDljZTVmZWQ4YThlMTExYjE2MzkxNyIsInZlcnNpb24iOjF9.y4K4-ICKWoib_dtJkrTjPrrrWVQO4vMJ4OZeXu4yrCHBEwc5Pa-605oDLjujZcVI5Vn2lE3piUUJn_Ko_eRKBQ
    - type: f1
      value: 0.9185
      name: F1 Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjBjYjUzZTlkYzJjZDhkMjM4MjBlZWYwNjA4NTZlZjY2Njc0ZDgyZjYyNjU5ZmM0YzY3ODFlN2ZlMWRiZDZmYiIsInZlcnNpb24iOjF9.WXwc2VTkkUDPCY5JxnHFPduRa_iViuxS3MvNiH4Od2kRNnIYxlFY2wo1yT3UQukAnz69Uq6M_aSi6a7qnxt7Bg
    - type: f1
      value: 0.9177368694234422
      name: F1 Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGMxMzBjOTNhOWVmZDg0NjlmMmFhY2RmYzc0YzRlMTkyN2E4NTVmYzdkYWEwMDljY2U5ZmQ5YmM5ZjlhYWNlMiIsInZlcnNpb24iOjF9.XcschKnQYuy1KCgM-eTPJxHaTyj4iRkmdc8Pyxa3i1b_7a8FOr5vBUdijrnh1sEj4Cg08yrM5o59sGWRz_ZuDg
    - type: loss
      value: 0.21991275250911713
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzJmMGRmZWZhYTdiMmE5ZjM2NDY1ZmUwNjJlZDU4YmZkNTMwYTczMDcyZjBmZDg1YjhjM2VmZWE5OTIyNTViMSIsInZlcnNpb24iOjF9.WczRZBXUG84OgZGRCJUq4bWOqVZN0Swd0A5mzRAj2YqTyx-ZMXqmMbrJ47bzHEE9_34B5SAGHFZWEf2879mIDw
---

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2287
- Accuracy: 0.918
- F1: 0.9182

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8478        | 1.0   | 250  | 0.3294          | 0.9015   | 0.8980 |
| 0.2616        | 2.0   | 500  | 0.2287          | 0.918    | 0.9182 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
1,228
lewtun/distilbert-base-uncased-finetuned-emotion-test-01
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-test-01
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.39
    - name: F1
      type: f1
      value: 0.21884892086330932
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion-test-01

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7510
- Accuracy: 0.39
- F1: 0.2188

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 2    | 1.7634          | 0.39     | 0.2188 |
| No log        | 2.0   | 4    | 1.7510          | 0.39     | 0.2188 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
1,229
lewtun/minilm-finetuned-emotion
[ "anger", "fear", "joy", "love", "sadness", "surprise" ]
---
license: mit
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: minilm-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: F1
      type: f1
      value: 0.9117582218338629
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# minilm-finetuned-emotion

This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3891
- F1: 0.9118

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.3957        | 1.0   | 250  | 1.0134          | 0.6088 |
| 0.8715        | 2.0   | 500  | 0.6892          | 0.8493 |
| 0.6085        | 3.0   | 750  | 0.4943          | 0.8920 |
| 0.4626        | 4.0   | 1000 | 0.4096          | 0.9078 |
| 0.3961        | 5.0   | 1250 | 0.3891          | 0.9118 |

### Framework versions

- Transformers 4.12.3
- Pytorch 1.6.0
- Datasets 1.15.1
- Tokenizers 0.10.3
1,230
lewtun/results
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: results
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.925
    - name: F1
      type: f1
      value: 0.9251012149383893
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.925
- F1: 0.9251

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8221        | 1.0   | 250  | 0.3106          | 0.9125   | 0.9102 |
| 0.2537        | 2.0   | 500  | 0.2147          | 0.925    | 0.9251 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
1,231
lewtun/roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi
[ "NEGATIVO", "POSITIVO" ]
---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model_index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: amazon_reviews_multi
      type: amazon_reviews_multi
      args: es
    metric:
      name: Accuracy
      type: accuracy
      value: 0.9285
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi

This model was trained from scratch on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3595
- Accuracy: 0.9285

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.103         | 1.0   | 1250 | 0.2864          | 0.928    |
| 0.0407        | 2.0   | 2500 | 0.3595          | 0.9285   |

### Framework versions

- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
1,233
lewtun/xlm-roberta-base-finetuned-marc-500-samples
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4" ]
---
tags:
- text-classification
---
1,234
lewtun/xlm-roberta-base-finetuned-marc-de
[ "good", "great", "ok", "poor", "terrible" ]
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-de
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-marc-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9934
- Mae: 0.4867

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1514        | 1.0   | 308  | 1.0455          | 0.5221 |
| 0.9997        | 2.0   | 616  | 0.9934          | 0.4867 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
1,235
lewtun/xlm-roberta-base-finetuned-marc-en-dummy
[ "good", "great", "ok", "poor", "terrible" ]
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en-dummy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en-dummy This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8931 - Mae: 0.4634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1258 | 1.0 | 235 | 0.9538 | 0.4390 | | 0.9445 | 2.0 | 470 | 0.8931 | 0.4634 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
1,236
lewtun/xlm-roberta-base-finetuned-marc-en-hslu
[ "good", "great", "ok", "poor", "terrible" ]
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en-hslu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en-hslu This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8826 - Mae: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1121 | 1.0 | 235 | 0.9400 | 0.5732 | | 0.9487 | 2.0 | 470 | 0.8826 | 0.5 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
1,237
lewtun/xlm-roberta-base-finetuned-marc-en
[ "good", "great", "ok", "poor", "terrible" ]
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8850 - Mae: 0.4390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1589 | 1.0 | 235 | 0.9769 | 0.5122 | | 0.974 | 2.0 | 470 | 0.8850 | 0.4390 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
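As with the other MARC fine-tunes, no inference snippet is given; a minimal sketch, assuming the labels map to star ratings (an assumption, since the card leaves this undocumented):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lewtun/xlm-roberta-base-finetuned-marc-en")
# Example English review (illustrative input)
classifier("Great sound quality, but the battery barely lasts a day.")
# Expected shape of the output: [{'label': 'ok', 'score': ...}]
```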
1,238
lewtun/xlm-roberta-base-finetuned-marc
[ "good", "great", "ok", "poor", "terrible" ]
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9932 - Mae: 0.4838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.05 | 1.0 | 860 | 1.0007 | 0.5074 | | 0.9166 | 2.0 | 1720 | 0.9932 | 0.4838 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
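A minimal usage sketch for this variant as well; XLM-R is multilingual and the card does not state which MARC language split was used, so the input language here is only illustrative:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lewtun/xlm-roberta-base-finetuned-marc")
classifier("Ce produit est correct, sans plus.")  # illustrative input
# Expected shape of the output: [{'label': 'ok', 'score': ...}]
```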
1,239
lhoestq/distilbert-base-uncased-finetuned-absa-as
[ "NEGATIVE", "POSITIVE" ]
DistilBERT fine-tuned for Aspect-Based Sentiment Analysis (ABSA) with an auxiliary sentence. ```bibtex @inproceedings{sun-etal-2019-utilizing, title = "Utilizing {BERT} for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence", author = "Sun, Chi and Huang, Luyao and Qiu, Xipeng", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", month = jun, year = "2019", address = "Minneapolis, Minnesota", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/N19-1035", doi = "10.18653/v1/N19-1035", pages = "380--385", abstract = "Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA). In this paper, we construct an auxiliary sentence from the aspect and convert ABSA to a sentence-pair classification task, such as question answering (QA) and natural language inference (NLI). We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets. The source codes are available at https://github.com/HSLCY/ABSA-BERT-pair.", } ```
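The card gives no usage snippet; below is a minimal sketch of the auxiliary-sentence formulation from the cited paper. Both the auxiliary-sentence template and the segment order are assumptions, since the card does not document how the sentence pairs were constructed during fine-tuning:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "lhoestq/distilbert-base-uncased-finetuned-absa-as"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

review = "The food was delicious but the service was painfully slow."
auxiliary = "what do you think of the service of it ?"  # hypothetical QA-style template

# Sentence pair (review, auxiliary sentence); the segment order is an assumption
inputs = tokenizer(review, auxiliary, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])  # NEGATIVE or POSITIVE
```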
1,240
liam168/c2-roberta-base-finetuned-dianping-chinese
[ "negative", "positive" ]
--- language: zh widget: - text: "我喜欢下雨。" - text: "我讨厌他。" --- # liam168/c2-roberta-base-finetuned-dianping-chinese ## Model description A 2-class sentiment model trained on a Chinese dialogue emotion corpus; the classes are positive (乐观) and negative (悲观). ## Overview - **Language model**: BertForSequenceClassification - **Model size**: 410M - **Language**: Chinese ## Example ```python >>> from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline >>> model_name = "liam168/c2-roberta-base-finetuned-dianping-chinese" >>> class_num = 2 >>> ts_texts = ["我喜欢下雨。", "我讨厌他."] >>> model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=class_num) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) >>> classifier(ts_texts[0]) >>> classifier(ts_texts[1]) [{'label': 'positive', 'score': 0.9973447918891907}] [{'label': 'negative', 'score': 0.9972558617591858}] ```
1,241
liam168/c4-zh-distilbert-base-uncased
[ "Female", "Sports", "Literature", "Campus" ]
--- language: zh tags: - exbert license: apache-2.0 widget: - text: "女人做得越纯粹,皮肤和身材就越好" - text: "我喜欢篮球" --- # liam168/c4-zh-distilbert-base-uncased ## Model description A classification model trained on data from 4 classes: "Female" (女性), "Sports" (体育), "Literature" (文学), and "Campus" (校园). ## Overview - **Language model**: DistilBERT - **Model size**: 280M - **Language**: Chinese ## Example ```python >>> from transformers import DistilBertForSequenceClassification, AutoTokenizer, pipeline >>> model_name = "liam168/c4-zh-distilbert-base-uncased" >>> class_num = 4 >>> ts_texts = ["女人做得越纯粹,皮肤和身材就越好", "我喜欢篮球"] >>> model = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=class_num) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) >>> classifier(ts_texts[0]) >>> classifier(ts_texts[1]) [{'label': 'Female', 'score': 0.9137857556343079}] [{'label': 'Sports', 'score': 0.8206522464752197}] ```
1,242
lidiia/autonlp-trans_class_arg-32957902
[ "0.0", "1.0" ]
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - lidiia/autonlp-data-trans_class_arg co2_eq_emissions: 0.9756221672668951 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 32957902 - CO2 Emissions (in grams): 0.9756221672668951 ## Validation Metrics - Loss: 0.2765039801597595 - Accuracy: 0.8939828080229226 - Precision: 0.7757009345794392 - Recall: 0.8645833333333334 - AUC: 0.9552659749670619 - F1: 0.8177339901477833 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lidiia/autonlp-trans_class_arg-32957902 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,243
lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: en tags: - textual-entailment - nli - pytorch datasets: - mnli license: mit widget: - text: "EpCAM is overexpressed in breast cancer. </s></s> EpCAM is downregulated in breast cancer." --- # BiomedNLP-PubMedBERT finetuned on textual entailment (NLI) This is [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext?text=%5BMASK%5D+is+a+tumor+suppressor+gene) fine-tuned on the MNLI dataset. It should be useful for textual entailment tasks involving biomedical corpora. ## Usage Given two sentences (a premise and a hypothesis), the model outputs the logits of entailment, neutral or contradiction. You can test the model using the Hugging Face model widget on the side: - Input two sentences (premise and hypothesis) one after the other. - The model returns the probabilities of 3 labels: entailment (LABEL_0), neutral (LABEL_1) and contradiction (LABEL_2), respectively. To use the model locally on your machine: ```python # import torch # device = torch.device("cuda" if torch.cuda.is_available() else "cpu") import numpy as np from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli") model = AutoModelForSequenceClassification.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli") premise = 'EpCAM is overexpressed in breast cancer' hypothesis = 'EpCAM is downregulated in breast cancer.' # run through the model fine-tuned on MNLI x = tokenizer.encode(premise, hypothesis, return_tensors='pt', truncation='only_first') logits = model(x)[0] probs = logits.softmax(dim=1) print('Probabilities for entailment, neutral, contradiction \n', np.around(probs.cpu().detach().numpy(), 3)) # Probabilities for entailment, neutral, contradiction # 0.001 0.001 0.998 ``` ## Metrics Evaluation on classification accuracy (entailment, contradiction, neutral) on the MNLI test set: | Metric | Value | | --- | --- | | Accuracy | 0.8338 | See the Training Metrics tab for detailed info.
1,245
lighteternal/nli-xlm-r-greek
[ "contradiction", "entailment", "neutral" ]
--- language: - el - en tags: - xlm-roberta-base datasets: - multi_nli - snli - allnli_greek metrics: - accuracy pipeline_tag: zero-shot-classification widget: - text: "Η Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας." candidate_labels: "τεχνολογία, πολιτική, αθλητισμός" multi_class: false license: apache-2.0 --- # Cross-Encoder for Greek Natural Language Inference (Textual Entailment) & Zero-Shot Classification ## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC) This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data The model was trained on the combined Greek+English version of the AllNLI dataset (the sum of [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)). The Greek part was created using the EN2EL NMT model available [here](https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased). The model can be used in two ways: * NLI/Textual Entailment: For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. * Zero-shot classification through the Hugging Face pipeline: Given a sentence and a set of labels/topics, it will output the likelihood of the sentence belonging to each of the topics. Under the hood, the logit for entailment between the sentence and each label is taken as the logit for the candidate label being valid. ## Performance Evaluation on classification accuracy (entailment, contradiction, neutral) on the mixed (Greek+English) AllNLI dev set: | Metric | Value | | --- | --- | | Accuracy | 0.8409 | ## To use the model for NLI/Textual Entailment #### Usage with sentence_transformers Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('lighteternal/nli-xlm-r-greek') scores = model.predict([('Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'), ('Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο'), ('Δυο γυναίκες μιλάνε στο κινητό', 'Το τραπέζι ήταν πράσινο')]) # Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] print(scores, labels) # Outputs # [[-3.1526504 2.9981945 -0.3108107] # [ 5.0549307 -2.757949 -1.6220676] # [-0.5124733 -2.2671669 3.1630592]] ['entailment', 'contradiction', 'neutral'] ``` #### Usage with Transformers AutoModel You can also use the model directly with the Transformers library (without the SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('lighteternal/nli-xlm-r-greek') tokenizer = AutoTokenizer.from_pretrained('lighteternal/nli-xlm-r-greek') features = tokenizer(['Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.'], ['Ο δρόμος έχει κόσμο', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## To use the model for Zero-Shot Classification This model can also be used for zero-shot classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='lighteternal/nli-xlm-r-greek') sent = "Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας" candidate_labels = ["πολιτική", "τεχνολογία", "αθλητισμός"] res = classifier(sent, candidate_labels) print(res) # outputs: # {'sequence': 'Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας', 'labels': ['τεχνολογία', 'αθλητισμός', 'πολιτική'], 'scores': [0.8380699157714844, 0.09086982160806656, 0.07106029987335205]} ``` ### Acknowledgement The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call) ### Citation info Citation for the Greek model TBA. Based on the work [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084) Kudos to @nreimers (Nils Reimers) for his support on GitHub.
1,246
lincoln/flaubert-mlsum-topic-classification
[ "Culture", "Economie", "Education", "Environement", "Justice", "Opinion", "Politique", "Societe", "Sport", "Technologie" ]
--- language: - fr license: mit datasets: - MLSUM pipeline_tag: "text-classification" widget: - text: La bourse de paris en forte baisse après que des canards ont envahit le parlement. tags: - text-classification - flaubert --- # Press-article topic classification with Flaubert This model is based on [`flaubert/flaubert_base_cased`](https://huggingface.co/flaubert/flaubert_base_cased) and was fine-tuned on press articles from the MLSUM dataset. In their paper, the reciTAL and Sorbonne teams proposed, as future work, building a topic-detection model for press articles. The topics were extracted from the article URLs, and we performed a topic-grouping step to eliminate those with too little volume and those that appeared redundant. We ultimately used the following list of topics, with these groupings: * __Economie__: economie, argent, emploi, entreprises, economie-francaise, immobilier, crise-financiere, evasion-fiscale, economie-mondiale, m-voiture, smart-cities, automobile, logement, flottes-d-entreprise, import, crise-de-l-euro, guide-des-impots, le-club-de-l-economie, telephonie-mobile * __Opinion__: idees, les-decodeurs, tribunes * __Politique__: politique, election-presidentielle-2012, election-presidentielle-2017, elections-americaines, municipales, referendum-sur-le-brexit, elections-legislatives-2017, elections-regionales, donald-trump, elections-regionales-2015, europeennes-2014, elections-cantonales-2011, primaire-parti-socialiste, gouvernement-philippe, elections-departementales-2015, chroniques-de-la-presidence-trump, primaire-de-la-gauche, la-republique-en-marche, elections-americaines-mi-mandat-2018, elections, elections-italiennes, elections-senatoriales * __Societe__: societe, sante, attaques-a-paris, immigration-et-diversite, religions, medecine, francaises-francais, mobilite * __Culture__: televisions-radio, musiques, festival, arts, scenes, festival-de-cannes, mode, bande-dessinee, architecture, vins, photo, m-mode, fashion-week, les-recettes-du-monde, tele-zapping, critique-litteraire, festival-d-avignon, m-gastronomie-le-lieu, les-enfants-akira, gastronomie, culture, livres, cinema, actualite-medias, blog, m-gastronomie * __Sport__: sport, football, jeux-olympiques, ligue-1, tennis, coupe-du-monde, mondial-2018, rugby, euro-2016, jeux-olympiques-rio-2016, cyclisme, ligue-des-champions, basket, roland-garros, athletisme, tour-de-france, euro2012, jeux-olympiques-pyeongchang-2018, coupe-du-monde-rugby, formule-1, voile, top-14, ski, handball, sports-mecaniques, sports-de-combat, blog-du-tour-de-france, sport-et-societe, sports-de-glisse, tournoi-des-6-nations * __Environement__: planete, climat, biodiversite, pollution, energies, cop21 * __Technologie__: pixels, technologies, sciences, cosmos, la-france-connectee, trajectoires-digitales * __Education__: campus, education, bac-lycee, enseignement-superieur, ecole-primaire-et-secondaire, o21, orientation-scolaire, brevet-college * __Justice__: police-justice, panama-papers, affaire-penelope-fillon, documents-wikileaks, enquetes, paradise-papers Topics with fewer than 100 articles were not taken into account. We also set aside articles referring to geographic topics, which gave rise to a separate classification model. After cleaning, the MLSUM corpus was reduced to 293,995 articles. On average, the body of an article contains 694 tokens. We trained the model on 20% of the cleaned corpus. On average, there are ~4K articles per class. ## Training We benchmarked several models by training them on different parts of the articles (title, summary, body, and title+summary) and with training samples of different sizes. ![Performance](./assets/Accuracy_cat.png) The models were trained on the Azure cloud with Tesla V100 GPUs. ## Model The model shared on HF is the one that takes the body of an article as input. We trained it on 20% of the cleaned dataset. ## Results ![Confusion matrix](assets/confusion_cat_m_0.2.png) *Rows correspond to the predicted labels and columns to the true topics. Percentages are computed over the columns.* _We do not guarantee the results over the long term. This model was built as part of a POC._ ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification from transformers import TextClassificationPipeline model_name = 'lincoln/flaubert-mlsum-topic-classification' loaded_tokenizer = AutoTokenizer.from_pretrained(model_name) loaded_model = AutoModelForSequenceClassification.from_pretrained(model_name) nlp = TextClassificationPipeline(model=loaded_model, tokenizer=loaded_tokenizer) nlp("Le Bayern Munich prend la grenadine.", truncation=True) ``` ## Citation ```bibtex @article{scialom2020mlsum, title={MLSUM: The Multilingual Summarization Corpus}, author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano}, year={2020}, eprint={2004.14900}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
1,248
lordtt13/emo-mobilebert
[ "angry", "happy", "others", "sad" ]
--- language: en datasets: - emo --- ## Emo-MobileBERT: a thin version of BERT LARGE, trained on the EmoContext Dataset from scratch ### Details of MobileBERT The **MobileBERT** model was presented in [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by *Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou* and here is the abstract: Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERT_BASE), and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERT_BASE). ### Details of the downstream task (Emotion Recognition) - Dataset 📚 SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text In this dataset, given a textual dialogue, i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes: - sad 😢 - happy 😃 - angry 😡 - others ### Model training The training script is present [here](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/emo-mobilebert.ipynb). ### Pipelining the Model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("lordtt13/emo-mobilebert") model = AutoModelForSequenceClassification.from_pretrained("lordtt13/emo-mobilebert") nlp_sentence_classif = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) nlp_sentence_classif("I've never had such a bad day in my life") # Output: [{'label': 'sad', 'score': 0.93153977394104}] ``` > Created by [Tanmay Thakur](https://github.com/lordtt13) | [LinkedIn](https://www.linkedin.com/in/tanmay-thakur-6bb5a9154/)
1,249
lucasresck/bert-base-cased-ag-news
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- language: - en license: mit tags: - bert - classification datasets: - ag_news metrics: - accuracy - f1 - recall - precision widget: - text: "Is it soccer or football?" example_title: "Sports" - text: "A new version of Ubuntu was released." example_title: "Sci/Tech" --- # bert-base-cased-ag-news BERT model fine-tuned on the AG News classification dataset using a linear layer on top of the [CLS] token output, with 0.945 test accuracy. ### How to use Here is how to use this model to classify a given text: ```python from transformers import AutoTokenizer, BertForSequenceClassification tokenizer = AutoTokenizer.from_pretrained('lucasresck/bert-base-cased-ag-news') model = BertForSequenceClassification.from_pretrained('lucasresck/bert-base-cased-ag-news') text = "Is it soccer or football?" encoded_input = tokenizer(text, return_tensors='pt', truncation=True, max_length=512) output = model(**encoded_input) ``` ### Limitations and bias Bias was not assessed for this model, but since pre-trained BERT is known to carry bias, it is also expected for this model. BERT's authors say: "This bias will also affect all fine-tuned versions of this model." ## Evaluation results ``` precision recall f1-score support 0 0.9539 0.9584 0.9562 1900 1 0.9884 0.9879 0.9882 1900 2 0.9251 0.9095 0.9172 1900 3 0.9127 0.9242 0.9184 1900 accuracy 0.9450 7600 macro avg 0.9450 0.9450 0.9450 7600 weighted avg 0.9450 0.9450 0.9450 7600 ```
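The snippet above stops at the raw model output; here is a short continuation for turning it into a class name. The mapping below assumes `LABEL_0`–`LABEL_3` follow the standard ag_news label order (World, Sports, Business, Sci/Tech), which the card does not state explicitly:

```python
import torch

# Continues from `output` in the snippet above
class_names = ["World", "Sports", "Business", "Sci/Tech"]  # assumed ag_news order
probs = torch.softmax(output.logits, dim=-1)
print(class_names[probs.argmax(dim=-1).item()])  # e.g. "Sports"
```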
1,250
lucianpopa/autonlp-SST1-529214890
[ "0", "1", "2", "3", "4" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - lucianpopa/autonlp-data-SST1 co2_eq_emissions: 49.618294309910624 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 529214890 - CO2 Emissions (in grams): 49.618294309910624 ## Validation Metrics - Loss: 0.7135734558105469 - Accuracy: 0.7042338838232481 - Macro F1: 0.6164041045783032 - Micro F1: 0.7042338838232481 - Weighted F1: 0.7028309161791009 - Macro Precision: 0.6497438111060598 - Micro Precision: 0.7042338838232481 - Weighted Precision: 0.7076651075198755 - Macro Recall: 0.6023419083862918 - Micro Recall: 0.7042338838232481 - Weighted Recall: 0.7042338838232481 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lucianpopa/autonlp-SST1-529214890 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autonlp-SST1-529214890", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autonlp-SST1-529214890", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,251
lucianpopa/autonlp-SST2-551215591
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - lucianpopa/autonlp-data-SST2 co2_eq_emissions: 8.883161797287569 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 551215591 - CO2 Emissions (in grams): 8.883161797287569 ## Validation Metrics - Loss: 0.08821876347064972 - Accuracy: 0.969531605275125 - Precision: 0.9734313841774404 - Recall: 0.9710127780407004 - AUC: 0.9949152422763072 - F1: 0.9722205769116863 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lucianpopa/autonlp-SST2-551215591 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autonlp-SST2-551215591", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autonlp-SST2-551215591", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,252
lucianpopa/autonlp-TREC-classification-522314623
[ "0", "1", "2", "3", "4", "5" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - lucianpopa/autonlp-data-TREC-classification co2_eq_emissions: 15.186006626915715 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 522314623 - CO2 Emissions (in grams): 15.186006626915715 ## Validation Metrics - Loss: 0.24612033367156982 - Accuracy: 0.9643183897529735 - Macro F1: 0.9493690949638435 - Micro F1: 0.9643183897529735 - Weighted F1: 0.9642384162837268 - Macro Precision: 0.9372705571897225 - Micro Precision: 0.9643183897529735 - Weighted Precision: 0.9652870438320825 - Macro Recall: 0.9649638583139503 - Micro Recall: 0.9643183897529735 - Weighted Recall: 0.9643183897529735 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lucianpopa/autonlp-TREC-classification-522314623 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autonlp-TREC-classification-522314623", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autonlp-TREC-classification-522314623", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,253
luiz826/roberta-to-music-genre
[ "Alternative", "Country", "Eletronic Music", "Gospel and Worship Songs", "Hip-Hop", "Jazz/Blues", "Pop", "R&B/Soul", "Reggae", "Rock" ]
This model was made for a project in the NLP group of the Technology and Artificial Intelligence League (TAIL). It predicts a song's music genre from its lyrics.
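No usage example is included; a minimal sketch, assuming the checkpoint loads as a standard sequence classifier with the genre labels listed above:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="luiz826/roberta-to-music-genre")
# Illustrative lyric-like input (not from the training data)
classifier("Turn up the amps, we scream into the night, guitars on fire")
# Expected shape of the output: [{'label': 'Rock', 'score': ...}]
```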
1,254
lumalik/vent-roberta-emotion
[ "Affection", "Anger", "Fear", "Happiness", "Sadness" ]
# Vent-roBERTa-emotion This is a RoBERTa model pretrained on Twitter and then trained for self-labeled emotion classification on the Vent dataset (see https://arxiv.org/abs/1901.04856). <br/> The Vent dataset contains 33 million posts annotated with one emotion by the users themselves. <br/> The model was trained to recognize 5 emotions ("Affection", "Anger", "Fear", "Happiness", "Sadness") on 7 million posts from the dataset. <br/> Example of how to use the classifier on single texts: <br/> ``` from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer from scipy.special import softmax tokenizer = AutoTokenizer.from_pretrained("lumalik/vent-roberta-emotion") model = AutoModelForSequenceClassification.from_pretrained("lumalik/vent-roberta-emotion") model.eval() texts = ["You wont believe what happened to me today", "You wont believe what happened to me today!", "You wont believe what happened to me today...", "You wont believe what happened to me today <3", "You wont believe what happened to me today :)", "You wont believe what happened to me today :("] for text in texts: encoded_text = tokenizer(text, return_tensors="pt") output = model(**encoded_text) output = softmax(output[0].detach().numpy(), axis=1) print("======================") print(text) print("Affection: {}".format(output[0][0])) print("Anger: {}".format(output[0][1])) print("Fear: {}".format(output[0][2])) print("Happiness: {}".format(output[0][3])) print("Sadness: {}".format(output[0][4])) ```
1,255
lvargas/distilbert-base-uncased-finetuned-emotion2
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion2 results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.903 - name: F1 type: f1 value: 0.9003235459489749 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3623 - Accuracy: 0.903 - F1: 0.9003 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 125 | 0.5960 | 0.8025 | 0.7750 | | 0.7853 | 2.0 | 250 | 0.3623 | 0.903 | 0.9003 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
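The template sections are empty, so here is a minimal usage sketch. Note the model returns raw `LABEL_0`–`LABEL_5` ids; mapping them to emotion names assumes the standard `emotion` dataset order (sadness, joy, love, anger, fear, surprise), which the card does not confirm:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lvargas/distilbert-base-uncased-finetuned-emotion2")
classifier("I can't believe how lucky I am today!")
# e.g. [{'label': 'LABEL_1', 'score': ...}], presumably "joy" under the assumed mapping
```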
1,257
lvwerra/distilbert-imdb
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: distilbert-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.928 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset (training notebook is [here](https://huggingface.co/lvwerra/distilbert-imdb/blob/main/distilbert-imdb-training.ipynb)). It achieves the following results on the evaluation set: - Loss: 0.1903 - Accuracy: 0.928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2195 | 1.0 | 1563 | 0.1903 | 0.928 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
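Beyond the training notebook, the card gives no inference example; a minimal sketch using the `NEGATIVE`/`POSITIVE` labels listed above:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lvwerra/distilbert-imdb")
classifier("This movie was a complete waste of two hours.")
# Expected shape of the output: [{'label': 'NEGATIVE', 'score': ...}]
```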
1,258
lysandre/dum
[ "NEGATIVE", "POSITIVE" ]
--- language: en license: apache-2.0 datasets: - sst2 tags: - OpenCLIP --- # Sentiment Analysis This is a BERT model fine-tuned for sentiment analysis.
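A minimal usage sketch, assuming the checkpoint loads as a standard SST-2-style sentiment classifier with the `NEGATIVE`/`POSITIVE` labels listed above:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lysandre/dum")
classifier("A delightful, well-paced story.")
# Expected shape of the output: [{'label': 'POSITIVE', 'score': ...}]
```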
1,260
lysandre/new-dummy-model
[ "NEGATIVE", "POSITIVE" ]
# Dummy model This is a dummy model.