| column | type | range |
| --- | --- | --- |
| `modelId` | string | length 6–107 |
| `label` | list | |
| `readme` | string | length 0–56.2k |
| `readme_len` | int64 | 0–56.2k |
ahmedrachid/FinancialBERT-Sentiment-Analysis
[ "negative", "neutral", "positive" ]
--- language: en tags: - financial-sentiment-analysis - sentiment-analysis datasets: - financial_phrasebank widget: - text: Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales. - text: Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000. - text: Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008. --- ### FinancialBERT for Sentiment Analysis [*FinancialBERT*](https://huggingface.co/ahmedrachid/FinancialBERT) is a BERT model pre-trained on a large corpus of financial texts. The purpose is to enhance financial NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from this model without needing the significant computational resources required to train it. The model was fine-tuned for the sentiment analysis task on the _Financial PhraseBank_ dataset. Experiments show that this model outperforms the general BERT and other financial domain-specific models. More details on `FinancialBERT`'s pre-training process can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining ### Training data The FinancialBERT model was fine-tuned on [Financial PhraseBank](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10), a dataset consisting of 4,840 financial news sentences categorised by sentiment (negative, neutral, positive). ### Fine-tuning hyper-parameters - learning_rate = 2e-5 - batch_size = 32 - max_seq_length = 512 - num_train_epochs = 5 ### Evaluation metrics The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set. | sentiment | precision | recall | f1-score | support | | ------------- |:-------------:|:-------------:|:-------------:| -----:| | negative | 0.96 | 0.97 | 0.97 | 58 | | neutral | 0.98 | 0.99 | 0.98 | 279 | | positive | 0.98 | 0.97 | 0.97 | 148 | | macro avg | 0.97 | 0.98 | 0.98 | 485 | | weighted avg | 0.98 | 0.98 | 0.98 | 485 | ### How to use The model can be used with the Transformers pipeline for sentiment analysis. ```python from transformers import BertTokenizer, BertForSequenceClassification from transformers import pipeline model = BertForSequenceClassification.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis",num_labels=3) tokenizer = BertTokenizer.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis") nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) sentences = ["Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.", "Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.", "Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.", ] results = nlp(sentences) print(results) [{'label': 'positive', 'score': 0.9998133778572083}, {'label': 'neutral', 'score': 0.9997822642326355}, {'label': 'negative', 'score': 0.9877365231513977}] ``` > Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
3,448
textattack/albert-base-v2-imdb
null
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the imdb dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.89236, as measured by the eval set accuracy, found after 3 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
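The card above describes the fine-tuned IMDB classifier but includes no usage snippet. A minimal sketch with the Transformers pipeline follows; the example review is illustrative, and the LABEL_0/LABEL_1 id-to-sentiment mapping is not documented in the card, so treat it as an assumption to verify.

```python
from transformers import pipeline

# Hedged usage sketch (not part of the original card).
classifier = pipeline("text-classification", model="textattack/albert-base-v2-imdb")

print(classifier("A surprisingly moving film with terrific performances."))
# Returns something like [{'label': 'LABEL_1', 'score': ...}]; the card does not
# document which id maps to positive/negative, so check model.config.id2label first.
```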
609
textattack/roberta-base-STS-B
[ "LABEL_0" ]
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 8, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a regression task, the model was trained with a mean squared error loss function. The best score the model achieved on this task was 0.9108696741479216, as measured by the eval set pearson correlation, found after 4 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
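Since this checkpoint has a regression head, the generic classification pipeline is not the natural fit; a hedged sketch that reads the raw logit as the similarity score follows. The sentence pair is illustrative, and the output scale (STS-B is conventionally 0 to 5) is an assumption the card does not confirm.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged usage sketch (not part of the original card): read the single regression
# logit as the similarity score for a sentence pair.
name = "textattack/roberta-base-STS-B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # higher means more similar; the absolute scale is not documented in the card
```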
629
yoshitomo-matsubara/bert-base-uncased-mnli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: en tags: - bert - mnli - ax - glue - torchdistill license: apache-2.0 datasets: - mnli - ax metrics: - accuracy --- `bert-base-uncased` fine-tuned on MNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb). The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mnli/ce/bert_base_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
836
philschmid/distilbert-base-multilingual-cased-sentiment-2
[ "negative", "neutral", "positive" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy - f1 model-index: - name: distilbert-base-multilingual-cased-sentiment-2 results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: all_languages metrics: - name: Accuracy type: accuracy value: 0.7475666666666667 - name: F1 type: f1 value: 0.7475666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-sentiment-2 This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.6067 - Accuracy: 0.7476 - F1: 0.7476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00024 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - distributed_type: sagemaker_data_parallel - num_devices: 8 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.6885 | 0.53 | 5000 | 0.6532 | 0.7217 | 0.7217 | | 0.6411 | 1.07 | 10000 | 0.6348 | 0.7319 | 0.7319 | | 0.6057 | 1.6 | 15000 | 0.6186 | 0.7387 | 0.7387 | | 0.5844 | 2.13 | 20000 | 0.6236 | 0.7449 | 0.7449 | | 0.549 | 2.67 | 25000 | 0.6067 | 0.7476 | 0.7476 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
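The auto-generated card stops at the training summary; a minimal usage sketch with the Transformers pipeline follows. The German review is an illustrative input, while the negative/neutral/positive labels match the checkpoint's label list.

```python
from transformers import pipeline

# Hedged usage sketch (not part of the auto-generated card above).
classifier = pipeline(
    "text-classification",
    model="philschmid/distilbert-base-multilingual-cased-sentiment-2",
)

print(classifier("Das Produkt kam schnell an und funktioniert einwandfrei."))
# e.g. [{'label': 'positive', 'score': ...}] with labels negative / neutral / positive
```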
2,316
fnlp/cpt-large
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- tags: - fill-mask - text2text-generation - text-classification - Summarization - Chinese - CPT - BART - BERT - seq2seq language: zh --- # Chinese CPT-Large ## Model description This is an implementation of CPT-Large. To use CPT, please import the file `modeling_cpt.py` (**Download** [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) that defines the architecture of CPT into your project. [**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf) Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu **Github Link:** https://github.com/fastnlp/CPT ## Usage ```python >>> from modeling_cpt import CPTForConditionalGeneration >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-large") >>> model = CPTForConditionalGeneration.from_pretrained("fnlp/cpt-large") >>> input_ids = tokenizer.encode("北京是[MASK]的首都", return_tensors='pt') >>> pred_ids = model.generate(input_ids, num_beams=4, max_length=20) >>> print(tokenizer.convert_ids_to_tokens(pred_ids[0])) ['[SEP]', '[CLS]', '北', '京', '是', '中', '国', '的', '首', '都', '[SEP]'] ``` **Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.** ## Citation ```bibtex @article{shao2021cpt, title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation}, author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu}, journal={arXiv preprint arXiv:2109.05729}, year={2021} } ```
1,687
Sigma/financial-sentiment-analysis
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- tags: - generated_from_trainer datasets: - financial_phrasebank metrics: - accuracy - f1 model-index: - name: financial-sentiment-analysis results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: financial_phrasebank args: sentences_allagree metrics: - name: Accuracy type: accuracy value: 0.9924242424242424 - name: F1 type: f1 value: 0.9924242424242424 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # financial-sentiment-analysis This model is a fine-tuned version of [ahmedrachid/FinancialBERT](https://huggingface.co/ahmedrachid/FinancialBERT) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.0395 - Accuracy: 0.9924 - F1: 0.9924 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
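As with the card above, no inference example is given; a minimal sketch follows. The input sentence is borrowed from the Financial PhraseBank widget examples earlier in this section, and the meaning of the generic LABEL_0/LABEL_1/LABEL_2 ids is an assumption to verify against the model config.

```python
from transformers import pipeline

# Hedged usage sketch (not part of the auto-generated card above).
classifier = pipeline("text-classification", model="Sigma/financial-sentiment-analysis")

print(classifier("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn."))
# The checkpoint exposes generic ids (LABEL_0/LABEL_1/LABEL_2); the card does not
# state how they map to negative/neutral/positive, so inspect model.config.id2label.
```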
1,555
Jeevesh8/std_0pnt2_bert_ft_cola-0
null
Entry not found
15
NbAiLab/nb-bert-base-mnli
[ "contradiction", "neutral", "entailment" ]
--- language: no license: cc-by-4.0 thumbnail: https://raw.githubusercontent.com/NBAiLab/notram/master/images/nblogo_2.png pipeline_tag: zero-shot-classification tags: - nb-bert - zero-shot-classification - pytorch - tensorflow - norwegian - bert datasets: - mnli - multi_nli - xnli widget: - example_title: Nyhetsartikkel om FHI text: Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september. candidate_labels: helse, politikk, sport, religion --- **Release 1.0** (March 11, 2021) # NB-BERT base model fine-tuned on machine-translated Norwegian MNLI ## Description The most effective way of creating a good classifier is to fine-tune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible. [Yin et al.](https://arxiv.org/abs/1909.00161) proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis. If we want to figure out if a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport"). When the model is fine-tuned on the 400k-sentence MNLI task, it is in many cases able to solve such classification tasks. There is no MNLI dataset of this size in Norwegian, so we trained the model on a machine-translated version of the original MNLI dataset. ## Testing the model For testing the model, we recommend the [NbAiLab Colab Notebook](https://colab.research.google.com/gist/peregilk/769b5150a2f807219ab8f15dd11ea449/nbailab-mnli-norwegian-demo.ipynb) ## Hugging Face zero-shot-classification pipeline The easiest way to try this out is by using the Hugging Face pipeline. Please note that you will get better results when using a Norwegian hypothesis template instead of the default English one. ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="NbAiLab/nb-bert-base-mnli") ``` You can then use this pipeline to classify sequences into any of the class names you specify. ```python sequence_to_classify = 'Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.' candidate_labels = ['politikk', 'helse', 'sport', 'religion'] hypothesis_template = 'Dette eksempelet er {}.' classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True) # {'labels': ['helse', 'politikk', 'sport', 'religion'], # 'scores': [0.4210019111633301, 0.0674605593085289, 0.000840459018945694, 0.0007541406666859984], # 'sequence': 'Folkehelseinstituttets mest optimistiske anslag er at alle over 18 år er ferdigvaksinert innen midten av september.'} ``` ## More information For more information on the model, see https://github.com/NBAiLab/notram Here you will also find a Colab notebook explaining in more detail how to use the zero-shot-classification pipeline.
2,939
cffl/bert-base-styleclassification-subjective-neutral
[ "NEUTRAL", "SUBJECTIVE" ]
--- license: apache-2.0 --- # bert-base-styleclassification-subjective-neutral ## Model description This [bert-base-uncased](https://huggingface.co/bert-base-uncased) model has been fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://arxiv.org/pdf/1911.09709.pdf) - a parallel corpus of 180,000 biased and neutralized sentence pairs along with contextual sentences and metadata. The model can be used to classify text as subjectively biased vs. neutrally toned. The development and modeling efforts that produced this model are documented in detail through [this blog series](https://blog.fastforwardlabs.com/2022/05/05/neutralizing-subjectivity-bias-with-huggingface-transformers.html). ## Intended uses & limitations The model is intended purely as a research output for NLP and data science communities. We developed this model for the purpose of evaluating text style transfer output. Specifically, we derive a Style Transfer Intensity (STI) metric from the classifier's output distributions. We also extract feature importances from the model via [Integrated Gradients](https://arxiv.org/pdf/1703.01365.pdf) to support a Content Preservation Score (CPS). We imagine this model will be used by researchers to better understand the limitations, robustness, and generalization of text style transfer models. Ultimately, we hope this model will inspire future work on text style transfer and serve as a benchmarking tool for the style attribute of subjectivity bias, specifically. Any production use of this model - whether commercial or not - is currently not intended. This is because, as [the team at OpenAI points out](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases), large language models like BERT reflect biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans, unless the deployers first carry out a study of biases relevant to the intended use-case. Neither the model nor the WNC dataset has been sufficiently evaluated for performance and bias. As we discuss in the blog series, since the WNC is a parallel dataset and we formulate the learning task as a supervised problem, the model indirectly adopts Wikipedia's NPOV policy as the definition for "neutrality" and "subjectivity". The NPOV policy may not fully reflect an end user's assumed/intended meaning of subjectivity because the notion of subjectivity itself can be...well, subjective. We discovered through our exploratory work that the WNC does contain data quality issues that will contribute to unintended bias in the model. For example, some NPOV revisions introduce factual information outside the context of the prompt as a means to correct bias. We believe these fact-based edits are out of scope for a subjective-to-neutral style transfer modeling task, but exist here nonetheless. ## How to use This model can be used directly with a HuggingFace pipeline for `text-classification`. ```python >>> from transformers import pipeline >>> classify = pipeline( task="text-classification", model="cffl/bert-base-styleclassification-subjective-neutral", return_all_scores=True, ) >>> input_text = "chemical abstracts service (cas), a prominent division of the american chemical society, is the world's leading source of chemical information."
>>> classify(input_text) [[{'label': 'SUBJECTIVE', 'score': 0.9765084385871887}, {'label': 'NEUTRAL', 'score': 0.023491567000746727}]] ``` ## Training procedure For training, we initialize HuggingFace’s [AutoModelforSequenceClassification](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForSequenceClassification) with [bert-base-uncased](https://huggingface.co/bert-base-uncased) pre-trained weights and perform a hyperparameter search over: batch size [16, 32], learning rate [3e-05, 3e-06, 3e-07], weight decay [0, 0.01, 0.1] and batch shuffling [True, False] while training for 15 epochs. We monitor performance using accuracy as we have a perfectly balanced dataset and assign equal cost to false positives and false negatives. The best performing model produces an overall accuracy of 72.50% -- please reference our [training script](https://github.com/fastforwardlabs/text-style-transfer/blob/main/scripts/train/classifier/train_classifier.py) and [classifier evaluation notebook](https://github.com/fastforwardlabs/text-style-transfer/blob/main/notebooks/WNC_full_style_classifier_evaluation.ipynb) for further details.
4,530
huggingface/CodeBERTa-language-id
[ "go", "java", "javascript", "php", "python", "ruby" ]
--- language: code thumbnail: https://cdn-media.huggingface.co/CodeBERTa/CodeBERTa.png datasets: - code_search_net --- # CodeBERTa-language-id: The World’s fanciest programming language identification algo 🤯 To demonstrate the usefulness of our CodeBERTa pretrained model on downstream tasks beyond language modeling, we fine-tune the [`CodeBERTa-small-v1`](https://huggingface.co/huggingface/CodeBERTa-small-v1) checkpoint on the task of classifying a sample of code into the programming language it's written in (*programming language identification*). We add a sequence classification head on top of the model. On the evaluation dataset, we attain an eval accuracy and F1 > 0.999 which is not surprising given that the task of language identification is relatively easy (see an intuition why, below). ## Quick start: using the raw model ```python CODEBERTA_LANGUAGE_ID = "huggingface/CodeBERTa-language-id" tokenizer = RobertaTokenizer.from_pretrained(CODEBERTA_LANGUAGE_ID) model = RobertaForSequenceClassification.from_pretrained(CODEBERTA_LANGUAGE_ID) input_ids = tokenizer.encode(CODE_TO_IDENTIFY) logits = model(input_ids)[0] language_idx = logits.argmax() # index for the resulting label ``` ## Quick start: using Pipelines 💪 ```python from transformers import TextClassificationPipeline pipeline = TextClassificationPipeline( model=RobertaForSequenceClassification.from_pretrained(CODEBERTA_LANGUAGE_ID), tokenizer=RobertaTokenizer.from_pretrained(CODEBERTA_LANGUAGE_ID) ) pipeline(CODE_TO_IDENTIFY) ``` Let's start with something very easy: ```python pipeline(""" def f(x): return x**2 """) # [{'label': 'python', 'score': 0.9999965}] ``` Now let's probe shorter code samples: ```python pipeline("const foo = 'bar'") # [{'label': 'javascript', 'score': 0.9977546}] ``` What if I remove the `const` token from the assignment? ```python pipeline("foo = 'bar'") # [{'label': 'javascript', 'score': 0.7176245}] ``` For some reason, this is still statistically detected as JS code, even though it's also valid Python code. However, if we slightly tweak it: ```python pipeline("foo = u'bar'") # [{'label': 'python', 'score': 0.7638422}] ``` This is now detected as Python (Notice the `u` string modifier). Okay, enough with the JS and Python domination already! Let's try fancier languages: ```python pipeline("echo $FOO") # [{'label': 'php', 'score': 0.9995257}] ``` (Yes, I used the word "fancy" to describe PHP 😅) ```python pipeline("outcome := rand.Intn(6) + 1") # [{'label': 'go', 'score': 0.9936151}] ``` Why is the problem of language identification so easy (with the correct toolkit)? 
Because code's syntax is rigid, and simple tokens such as `:=` (the assignment operator in Go) are perfect predictors of the underlying language: ```python pipeline(":=") # [{'label': 'go', 'score': 0.9998052}] ``` By the way, because we trained our own custom tokenizer on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset, and it handles streams of bytes in a very generic way, syntactic constructs such `:=` are represented by a single token: ```python self.tokenizer.encode(" :=", add_special_tokens=False) # [521] ``` <br> ## Fine-tuning code <details> ```python import gzip import json import logging import os from pathlib import Path from typing import Dict, List, Tuple import numpy as np import torch from sklearn.metrics import f1_score from tokenizers.implementations.byte_level_bpe import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing from torch.nn.utils.rnn import pad_sequence from torch.utils.data import DataLoader, Dataset from torch.utils.data.dataset import Dataset from torch.utils.tensorboard.writer import SummaryWriter from tqdm import tqdm, trange from transformers import RobertaForSequenceClassification from transformers.data.metrics import acc_and_f1, simple_accuracy logging.basicConfig(level=logging.INFO) CODEBERTA_PRETRAINED = "huggingface/CodeBERTa-small-v1" LANGUAGES = [ "go", "java", "javascript", "php", "python", "ruby", ] FILES_PER_LANGUAGE = 1 EVALUATE = True # Set up tokenizer tokenizer = ByteLevelBPETokenizer("./pretrained/vocab.json", "./pretrained/merges.txt",) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) # Set up Tensorboard tb_writer = SummaryWriter() class CodeSearchNetDataset(Dataset): examples: List[Tuple[List[int], int]] def __init__(self, split: str = "train"): """ train | valid | test """ self.examples = [] src_files = [] for language in LANGUAGES: src_files += list( Path("../CodeSearchNet/resources/data/").glob(f"{language}/final/jsonl/{split}/*.jsonl.gz") )[:FILES_PER_LANGUAGE] for src_file in src_files: label = src_file.parents[3].name label_idx = LANGUAGES.index(label) print("🔥", src_file, label) lines = [] fh = gzip.open(src_file, mode="rt", encoding="utf-8") for line in fh: o = json.loads(line) lines.append(o["code"]) examples = [(x.ids, label_idx) for x in tokenizer.encode_batch(lines)] self.examples += examples print("🔥🔥") def __len__(self): return len(self.examples) def __getitem__(self, i): # We’ll pad at the batch level. return self.examples[i] model = RobertaForSequenceClassification.from_pretrained(CODEBERTA_PRETRAINED, num_labels=len(LANGUAGES)) train_dataset = CodeSearchNetDataset(split="train") eval_dataset = CodeSearchNetDataset(split="test") def collate(examples): input_ids = pad_sequence([torch.tensor(x[0]) for x in examples], batch_first=True, padding_value=1) labels = torch.tensor([x[1] for x in examples]) # ^^ uncessary .unsqueeze(-1) return input_ids, labels train_dataloader = DataLoader(train_dataset, batch_size=256, shuffle=True, collate_fn=collate) batch = next(iter(train_dataloader)) model.to("cuda") model.train() for param in model.roberta.parameters(): param.requires_grad = False ## ^^ Only train final layer. 
print(f"num params:", model.num_parameters()) print(f"num trainable params:", model.num_parameters(only_trainable=True)) def evaluate(): eval_loss = 0.0 nb_eval_steps = 0 preds = np.empty((0), dtype=np.int64) out_label_ids = np.empty((0), dtype=np.int64) model.eval() eval_dataloader = DataLoader(eval_dataset, batch_size=512, collate_fn=collate) for step, (input_ids, labels) in enumerate(tqdm(eval_dataloader, desc="Eval")): with torch.no_grad(): outputs = model(input_ids=input_ids.to("cuda"), labels=labels.to("cuda")) loss = outputs[0] logits = outputs[1] eval_loss += loss.mean().item() nb_eval_steps += 1 preds = np.append(preds, logits.argmax(dim=1).detach().cpu().numpy(), axis=0) out_label_ids = np.append(out_label_ids, labels.detach().cpu().numpy(), axis=0) eval_loss = eval_loss / nb_eval_steps acc = simple_accuracy(preds, out_label_ids) f1 = f1_score(y_true=out_label_ids, y_pred=preds, average="macro") print("=== Eval: loss ===", eval_loss) print("=== Eval: acc. ===", acc) print("=== Eval: f1 ===", f1) # print(acc_and_f1(preds, out_label_ids)) tb_writer.add_scalars("eval", {"loss": eval_loss, "acc": acc, "f1": f1}, global_step) ### Training loop global_step = 0 train_iterator = trange(0, 4, desc="Epoch") optimizer = torch.optim.AdamW(model.parameters()) for _ in train_iterator: epoch_iterator = tqdm(train_dataloader, desc="Iteration") for step, (input_ids, labels) in enumerate(epoch_iterator): optimizer.zero_grad() outputs = model(input_ids=input_ids.to("cuda"), labels=labels.to("cuda")) loss = outputs[0] loss.backward() tb_writer.add_scalar("training_loss", loss.item(), global_step) optimizer.step() global_step += 1 if EVALUATE and global_step % 50 == 0: evaluate() model.train() evaluate() os.makedirs("./models/CodeBERT-language-id", exist_ok=True) model.save_pretrained("./models/CodeBERT-language-id") ``` </details> <br> ## CodeSearchNet citation <details> ```bibtex @article{husain_codesearchnet_2019, title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}}, shorttitle = {{CodeSearchNet} {Challenge}}, url = {http://arxiv.org/abs/1909.09436}, urldate = {2020-03-12}, journal = {arXiv:1909.09436 [cs, stat]}, author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, month = sep, year = {2019}, note = {arXiv: 1909.09436}, } ``` </details>
8,962
textattack/xlnet-base-cased-SST-2
null
Entry not found
15
mdraw/german-news-sentiment-bert
[ "negative", "neutral", "positive" ]
# German sentiment BERT finetuned on news data Sentiment analysis model based on https://huggingface.co/oliverguhr/german-sentiment-bert, with additional training on German news texts about migration. This model is part of the project https://github.com/text-analytics-20/news-sentiment-development, which explores sentiment development in German news articles about migration between 2007 and 2019. Code for inference (predicting sentiment polarity) on raw text can be found at https://github.com/text-analytics-20/news-sentiment-development/blob/main/sentiment_analysis/bert.py If you are not interested in polarity but just want to predict discrete class labels (0: positive, 1: negative, 2: neutral), you can also use the model with Oliver Guhr's `germansentiment` package as follows: First install the package from PyPI: ```bash pip install germansentiment ``` Then you can use the model in Python: ```python from germansentiment import SentimentModel model = SentimentModel('mdraw/german-news-sentiment-bert') # Examples from our validation dataset texts = [ '[...], schwärmt der parteilose Vizebürgermeister und Historiker Christian Matzka von der "tollen Helferszene".', 'Flüchtlingsheim 11.05 Uhr: Massenschlägerei', 'Rotterdam habe einen Migrantenanteil von mehr als 50 Prozent.', ] result = model.predict_sentiment(texts) print(result) ``` The code above will print: ```python ['positive', 'negative', 'neutral'] ```
1,454
Jeevesh8/std_0pnt2_bert_ft_cola-1
null
Entry not found
15
textattack/roberta-base-ag-news
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 5e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9469736842105263, as measured by the eval set accuracy, found after 4 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
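The TextAttack cards list hyperparameters but no inference code; a minimal sketch for this AG News classifier follows. The headline is illustrative, and the mapping of LABEL_0..LABEL_3 to the AG News topics (World, Sports, Business, Sci/Tech) is an assumption the card does not confirm.

```python
from transformers import pipeline

# Hedged usage sketch (not part of the original card).
classifier = pipeline("text-classification", model="textattack/roberta-base-ag-news")

print(classifier("The central bank raised interest rates by a quarter point on Tuesday."))
# Returns a generic id such as LABEL_2; verify the id-to-topic mapping via
# model.config.id2label before relying on it.
```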
620
Jeevesh8/std_0pnt2_bert_ft_cola-2
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-6
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-3
null
Entry not found
15
SkolkovoInstitute/russian_toxicity_classifier
[ "neutral", "toxic" ]
--- language: - ru tags: - toxic comments classification licenses: - cc-by-nc-sa --- BERT-based classifier (fine-tuned from [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)), trained on a merge of the Russian Language Toxic Comments [dataset](https://www.kaggle.com/blackmoon/russian-language-toxic-comments/metadata) collected from 2ch.hk and the Toxic Russian Comments [dataset](https://www.kaggle.com/alexandersemiletov/toxic-russian-comments) collected from ok.ru. The datasets were merged, shuffled, and split into train, dev, and test splits in 80-10-10 proportion. The metrics obtained on the test dataset are as follows: | | precision | recall | f1-score | support | |:------------:|:---------:|:------:|:--------:|:-------:| | 0 | 0.98 | 0.99 | 0.98 | 21384 | | 1 | 0.94 | 0.92 | 0.93 | 4886 | | accuracy | | | 0.97 | 26270| | macro avg | 0.96 | 0.96 | 0.96 | 26270 | | weighted avg | 0.97 | 0.97 | 0.97 | 26270 | ## How to use ```python from transformers import BertTokenizer, BertForSequenceClassification # load tokenizer and model weights tokenizer = BertTokenizer.from_pretrained('SkolkovoInstitute/russian_toxicity_classifier') model = BertForSequenceClassification.from_pretrained('SkolkovoInstitute/russian_toxicity_classifier') # prepare the input batch = tokenizer.encode('ты супер', return_tensors='pt') # inference model(batch) ``` ## Licensing Information [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
1,819
Jeevesh8/std_0pnt2_bert_ft_cola-7
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-4
null
Entry not found
15
cross-encoder/stsb-TinyBERT-L-4
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for Semantic Textual Similarity This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 indicating the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: ``` from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/stsb-TinyBERT-L-4') scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')]) ``` The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. You can also use this model without sentence_transformers by loading it with the Transformers ``AutoModel`` class.
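A hedged sketch of that plain-Transformers route follows. Loading the checkpoint with `AutoModelForSequenceClassification` and squashing the single logit with a sigmoid to obtain the documented 0-to-1 score are assumptions based on how one-label cross-encoders are commonly exported, not details stated in the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "cross-encoder/stsb-TinyBERT-L-4"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A man is eating food.", "A man is eating a piece of bread.", return_tensors="pt")
with torch.no_grad():
    logit = model(**inputs).logits.squeeze()
print(torch.sigmoid(logit).item())  # assumed mapping of the raw logit to the 0-1 similarity score
```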
941
Jeevesh8/std_0pnt2_bert_ft_cola-9
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-5
null
Entry not found
15
Hate-speech-CNERG/dehatebert-mono-english
[ "NON_HATE", "HATE" ]
--- language: en license: apache-2.0 --- This model is used for detecting **hate speech** in the **English language**. The mono in the name refers to the monolingual setting, where the model is trained using only English language data. It is fine-tuned from the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.726030 for a learning rate of 2e-5. Training code can be found here: https://github.com/punyajoy/DE-LIMIT ### For more details about our paper Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020. ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @article{aluru2020deep, title={Deep Learning Models for Multilingual Hate Speech Detection}, author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh}, journal={arXiv preprint arXiv:2004.06465}, year={2020} } ~~~
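The card points to the training repository but shows no inference code; a minimal sketch follows. The input sentence is illustrative; the NON_HATE/HATE labels match the checkpoint's label list.

```python
from transformers import pipeline

# Hedged usage sketch (not part of the original card).
classifier = pipeline("text-classification", model="Hate-speech-CNERG/dehatebert-mono-english")

print(classifier("I really enjoyed the concert last night."))
# e.g. [{'label': 'NON_HATE', 'score': ...}]; the other class is 'HATE'
```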
1,047
Jeevesh8/std_0pnt2_bert_ft_cola-8
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-10
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-11
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-14
null
Entry not found
15
textattack/distilbert-base-uncased-CoLA
null
## TextAttack Model Card This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 64, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.8235858101629914, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
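A minimal usage sketch for this CoLA (grammatical acceptability) classifier follows; the two example sentences are illustrative, and the meaning of the generic LABEL_0/LABEL_1 ids is an assumption to check against the model config.

```python
from transformers import pipeline

# Hedged usage sketch (not part of the original card): CoLA scores whether a
# sentence is grammatically acceptable.
classifier = pipeline("text-classification", model="textattack/distilbert-base-uncased-CoLA")

print(classifier("The book was written by the author."))
print(classifier("Book the was author written by."))
# Verify which of LABEL_0/LABEL_1 means "acceptable" via model.config.id2label.
```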
530
Jeevesh8/std_0pnt2_bert_ft_cola-12
null
Entry not found
15
bhadresh-savani/roberta-base-emotion
[ "anger", "fear", "joy", "love", "sadness", "surprise" ]
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - emotion - pytorch license: apache-2.0 datasets: - emotion metrics: - Accuracy, F1 Score model-index: - name: bhadresh-savani/roberta-base-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: default split: test metrics: - name: Accuracy type: accuracy value: 0.931 verified: true - name: Precision Macro type: precision value: 0.9168321948556312 verified: true - name: Precision Micro type: precision value: 0.931 verified: true - name: Precision Weighted type: precision value: 0.9357445689014415 verified: true - name: Recall Macro type: recall value: 0.8743657671177089 verified: true - name: Recall Micro type: recall value: 0.931 verified: true - name: Recall Weighted type: recall value: 0.931 verified: true - name: F1 Macro type: f1 value: 0.8821236522209227 verified: true - name: F1 Micro type: f1 value: 0.931 verified: true - name: F1 Weighted type: f1 value: 0.9300782840205046 verified: true - name: loss type: loss value: 0.15155883133411407 verified: true --- # roberta-base-emotion ## Model description: [roberta](https://arxiv.org/abs/1907.11692) is BERT trained with better design and hyperparameter choices, hence the name Robustly optimized BERT pretraining approach. [roberta-base](https://huggingface.co/roberta-base) was fine-tuned on the emotion dataset using the HuggingFace Trainer with the hyperparameters below: ``` learning rate 2e-5, batch size 64, num_train_epochs=8, ``` ## Model Performance Comparison on the Emotion Dataset from Twitter: | Model | Accuracy | F1 Score | Test Samples per Second | | --- | --- | --- | --- | | [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 | | [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 | | [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 | | [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 | ## How to Use the model: ```python from transformers import pipeline classifier = pipeline("text-classification",model='bhadresh-savani/roberta-base-emotion', return_all_scores=True) prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", ) print(prediction) """ Output: [[ {'label': 'sadness', 'score': 0.002281982684507966}, {'label': 'joy', 'score': 0.9726489186286926}, {'label': 'love', 'score': 0.021365027874708176}, {'label': 'anger', 'score': 0.0026395076420158148}, {'label': 'fear', 'score': 0.0007162453257478774}, {'label': 'surprise', 'score': 0.0003483477921690792} ]] """ ``` ## Dataset: [Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure [Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb) follow the above notebook by changing the model name to roberta ## Eval results ```json { 'test_accuracy': 0.9395, 'test_f1': 0.9397328860104454, 'test_loss': 0.14367154240608215, 'test_runtime': 10.2229, 'test_samples_per_second': 195.639, 'test_steps_per_second': 3.13 } ``` ## Reference: * [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/)
3,928
SkolkovoInstitute/rubert-base-corruption-detector
[ "unnatural", "natural" ]
--- language: - ru tags: - fluency --- This is a model for evaluation of naturalness of short Russian texts. It has been trained to distinguish human-written texts from their corrupted versions. Corruption sources: random replacement, deletion, addition, shuffling, and re-inflection of words and characters, random changes of capitalization, round-trip translation, filling random gaps with T5 and RoBERTA models. For each original text, we sampled three corrupted texts, so the model is uniformly biased towards the `unnatural` label. Data sources: web-corpora from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) (`rus_news_2020_100K`, `rus_newscrawl-public_2018_100K`, `rus-ru_web-public_2019_100K`, `rus_wikipedia_2021_100K`), comments from [OK](https://www.kaggle.com/alexandersemiletov/toxic-russian-comments) and [Pikabu](https://www.kaggle.com/blackmoon/russian-language-toxic-comments). On our private test dataset, the model has achieved 40% rank correlation with human judgements of naturalness, which is higher than GPT perplexity, another popular fluency metric.
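The card describes the training data but includes no usage code; a minimal sketch follows. The Russian sentences are illustrative (one natural, one deliberately corrupted), and the natural/unnatural labels match the checkpoint's label list.

```python
from transformers import pipeline

# Hedged usage sketch (not part of the original card).
classifier = pipeline("text-classification", model="SkolkovoInstitute/rubert-base-corruption-detector")

print(classifier("Мама мыла раму."))           # a natural short sentence
print(classifier("Мама рамуу мыла мыла ра"))   # a deliberately corrupted variant
# Labels are 'natural' and 'unnatural'; note the training bias towards 'unnatural'
# mentioned in the card (three corrupted samples per original).
```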
1,110
Jeevesh8/std_0pnt2_bert_ft_cola-13
null
Entry not found
15
microsoft/deberta-base-mnli
[ "CONTRADICTION", "NEUTRAL", "ENTAILMENT" ]
--- language: en tags: - deberta-v1 - deberta-mnli tasks: mnli thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit widget: - text: "[CLS] I love you. [SEP] I like you. [SEP]" --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. This model is the base DeBERTa model fine-tuned on the MNLI task. #### Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and MNLI tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m | |-------------------|-----------|-----------|--------| | RoBERTa-base | 91.5/84.6 | 83.7/80.5 | 87.6 | | XLNet-Large | -/- | -/80.2 | 86.8 | | **DeBERTa-base** | 93.1/87.2 | 86.2/83.1 | 88.8 | ### Citation If you find DeBERTa useful for your work, please cite the following paper: ```latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
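The card ships a widget example but no inference code; a hedged sketch mirroring that widget follows. Reading the label names from the model config avoids hard-coding an assumed label order.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged usage sketch (not part of the original card), reusing the widget's sentence pair.
name = "microsoft/deberta-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I love you.", "I like you.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 4))  # CONTRADICTION / NEUTRAL / ENTAILMENT
```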
1,441
Jeevesh8/std_0pnt2_bert_ft_cola-17
null
Entry not found
15
svalabs/cross-electra-ms-marco-german-uncased
[ "LABEL_0" ]
# SVALabs - German Uncased Electra Cross-Encoder In this repository, we present our german, uncased cross-encoder for Passage Retrieval. This model was trained on the basis of the german electra uncased model from the [german-nlp-group](https://huggingface.co/german-nlp-group/electra-base-german-uncased) and finetuned as a cross-encoder for Passage Retrieval using the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) package. For this purpose, we translated the [MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) dataset using the [fairseq-wmt19-en-de](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) translation model. ### Model Details | | Description or Link | |---|---| |**Base model** | [```german-nlp-group/electra-base-german-uncased```](https://huggingface.co/german-nlp-group/electra-base-german-uncased) | |**Finetuning task**| Passage Retrieval / Semantic Search | |**Source dataset**| [```MSMARCO-Passage-Ranking```](https://github.com/microsoft/MSMARCO-Passage-Ranking) | |**Translation model**| [```fairseq-wmt19-en-de```](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) | ### Performance We evaluated our model on the [GermanDPR testset](https://deepset.ai/germanquad) and followed the benchmark framework of [BEIR](https://github.com/UKPLab/beir). In order to compare our results, we conducted an evaluation on the same test data with BM25 and presented the results in the table below. We took every paragraph with negative and positive context out of the testset and deduplicated them. The resulting corpus size is 2871 against 1025 queries. | Model | NDCG@1 | NDCG@5 | NDCG@10 | Recall@1 | Recall@5 | Recall@10 | |:-------------------:|:------:|:------:|:-------:|:--------:|:--------:|:---------:| | BM25 | 0.1463 | 0.3451 | 0.4097 | 0.1463 | 0.5424 | 0.7415 | | BM25(Top 100) +Ours | 0.6410 | 0.7885 | 0.7943 | 0.6410 | 0.8576 | 0.9024 | ### How to Use With ```sentence-transformers``` package (see [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers) on GitHub for more details): ```python from sentence_transformers.cross_encoder import CrossEncoder cross_model = CrossEncoder("svalabs/cross-electra-ms-marco-german-uncased") ``` ### Semantic Search Example ```python import numpy as np from sklearn.metrics.pairwise import cosine_similarity K = 3 # number of top ranks to retrieve docs = [ "Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.", "Der Gepard jagt seine Beute.", "Wir haben in der Agentur ein neues System für Zeiterfassung.", "Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.", "Einen Impftermin kann mir der Arzt momentan noch nicht anbieten.", "Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut.", "Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig.", "Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen.", "Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.", "Bei ALDI sind die Bananen gerade im Angebot.", "Die Entstehung der Erde ist 4,5 milliarden jahre her.", "Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.", "DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main." 
] queries = [ "dax steigt", "dax sinkt", "probleme mit knieschmerzen", "software für urlaubsstunden", "raubtier auf der jagd", "alter der erde", "wie alt ist unser planet?", "wie kapital sichern", "supermarkt lebensmittel reduziert", "wodurch ist der tyrannosaurus aussgestorben", "serien streamen" ] # encode each query document pair from itertools import product combs = list(product(queries, docs)) outputs = cross_model.predict(combs).reshape((len(queries), len(docs))) # print results for i, query in enumerate(queries): ranks = np.argsort(-outputs[i]) print("Query:", query) for j, r in enumerate(ranks[:3]): print(f"[{j}: {outputs[i, r]: .3f}]", docs[r]) print("-"*96) ``` **Console Output**: ``` Query: dax steigt [0: 7.676] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. [1: 0.821] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. [2: -9.905] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen. ------------------------------------------------------------------------------------------------ Query: dax sinkt [0: 8.079] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. [1: -0.491] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. [2: -9.224] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen. ------------------------------------------------------------------------------------------------ Query: probleme mit knieschmerzen [0: 6.753] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte. [1: -5.866] Einen Impftermin kann mir der Arzt momentan noch nicht anbieten. [2: -9.461] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut. ------------------------------------------------------------------------------------------------ Query: software für urlaubsstunden [0: 1.707] Wir haben in der Agentur ein neues System für Zeiterfassung. [1: -10.649] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte. [2: -11.280] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. ------------------------------------------------------------------------------------------------ Query: raubtier auf der jagd [0: 4.596] Der Gepard jagt seine Beute. [1: -6.809] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. [2: -8.392] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig. ------------------------------------------------------------------------------------------------ Query: alter der erde [0: 7.343] Die Entstehung der Erde ist 4,5 milliarden jahre her. [1: -7.664] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet. [2: -8.020] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig. ------------------------------------------------------------------------------------------------ Query: wie alt ist unser planet? [0: 7.672] Die Entstehung der Erde ist 4,5 milliarden jahre her. [1: -9.638] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet. [2: -10.251] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut. 
------------------------------------------------------------------------------------------------ Query: wie kapital sichern [0: 3.927] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen. [1: -8.733] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben. [2: -10.090] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte. ------------------------------------------------------------------------------------------------ Query: supermarkt lebensmittel reduziert [0: 3.508] Bei ALDI sind die Bananen gerade im Angebot. [1: -10.057] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig. [2: -10.470] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main. ------------------------------------------------------------------------------------------------ Query: wodurch ist der tyrannosaurus aussgestorben [0: 0.079] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet. [1: -10.701] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte. [2: -11.200] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. ------------------------------------------------------------------------------------------------ Query: serien streamen [0: 3.392] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie. [1: -5.725] Der Gepard jagt seine Beute. [2: -8.378] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut. ------------------------------------------------------------------------------------------------ ``` ### Contact - Baran Avinc, baran.avinc@sva.de - Jonas Grebe, jonas.grebe@sva.de - Lisa Stolz, lisa.stolz@sva.de - Bonian Riebe, bonian.riebe@sva.de ### References - N. Reimers and I. Gurevych (2019), ['Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks'](https://arxiv.org/abs/1908.10084). - Payal Bajaj et al. (2018), ['MS MARCO: A Human Generated MAchine Reading COmprehension Dataset'](https://arxiv.org/abs/1611.09268). - N. Thakur et al. (2021), ['BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models'](https://arxiv.org/abs/2104.08663). - T. Möller, J. Risch and M. Pietsch (2021), ['GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval'](https://arxiv.org/abs/2104.12741). - Hofstätter et al. (2021), ['Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation'](https://arxiv.org/abs/2010.02666)
9,543
Jeevesh8/std_0pnt2_bert_ft_cola-15
null
Entry not found
15
DaNLP/da-bert-emotion-binary
[ "emotional", "no emotion" ]
--- language: - da tags: - bert - pytorch - emotion license: cc-by-sa-4.0 datasets: - social media metrics: - f1 widget: - text: Der er et træ i haven. --- # Danish BERT for emotion detection The BERT Emotion model detects whether a Danish text is emotional or not. It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-emotion-binary") tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-emotion-binary") ``` ## Training data The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
1,022
Jeevesh8/std_0pnt2_bert_ft_cola-16
null
Entry not found
15
cardiffnlp/tweet-topic-21-multi
[ "arts_&_culture", "business_&_entrepreneurs", "celebrity_&_pop_culture", "diaries_&_daily_life", "family", "fashion_&_style", "film_tv_&_video", "fitness_&_health", "food_&_dining", "gaming", "learning_&_educational", "music", "news_&_social_concern", "other_hobbies", "relationships", "science_&_technology", "sports", "travel_&_adventure", "youth_&_student_life" ]
# tweet-topic-21-multi This is a roBERTa-base model trained on ~124M tweets from January 2018 to December 2021 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m)), and fine-tuned for multi-label topic classification on a corpus of 11,267 tweets. The original roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English. - Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829). - Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms). <b>Labels</b>: | <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> | |-----------------------------|---------------------|----------------------------|--------------------------| | 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports | | 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure | | 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life | | 4: family | 9: gaming | 14: relationships | | ## Full classification example ```python from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import expit MODEL = "cardiffnlp/tweet-topic-21-multi" tokenizer = AutoTokenizer.from_pretrained(MODEL) # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) class_mapping = model.config.id2label text = "It is great to see athletes promoting awareness for climate change." tokens = tokenizer(text, return_tensors='pt') output = model(**tokens) scores = output[0][0].detach().numpy() scores = expit(scores) predictions = (scores >= 0.5) * 1 # TF #tf_model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) #class_mapping = model.config.id2label #text = "It is great to see athletes promoting awareness for climate change." #tokens = tokenizer(text, return_tensors='tf') #output = tf_model(**tokens) #scores = output[0][0] #scores = expit(scores) #predictions = (scores >= 0.5) * 1 # Map to classes for i in range(len(predictions)): if predictions[i]: print(class_mapping[i]) ``` Output: ``` news_&_social_concern sports ```
2,708
Jeevesh8/std_0pnt2_bert_ft_cola-18
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-19
null
Entry not found
15
MoritzLaurer/policy-distilbert-7d
[ "external relations", "freedom and democracy", "political system", "economy", "welfare and quality of life", "fabric of society", "social groups" ]
---
language:
- en
tags:
- text-classification
metrics:
- accuracy (balanced)
- F1 (weighted)
widget:
- text: "70-85% of the population needs to get vaccinated against the novel coronavirus to achieve herd immunity."
---

# Policy-DistilBERT-7d

## Model description

This model was trained on 129,669 manually annotated sentences to classify text into one of seven political categories: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups'.

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "MoritzLaurer/policy-distilbert-7d"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "The new variant first detected in southern England in September is blamed for sharp rises in levels of positive tests in recent weeks in London, south-east England and the east of England"

input = tokenizer(text, truncation=True, return_tensors="pt")
output = model(input["input_ids"])
# the output corresponds to the following labels:
# 0: external relations, 1: freedom and democracy, 2: political system, 3: economy, 4: welfare and quality of life, 5: fabric of society, 6: social groups

# output to dictionary
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["external relations", "freedom and democracy", "political system", "economy", "welfare and quality of life", "fabric of society", "social groups"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
#{'external relations': 0.0, 'freedom and democracy': 0.0, 'political system': 0.9, 'economy': 0.4,
# 'welfare and quality of life': 98.3, 'fabric of society': 0.3, 'social groups': 0.0}
```

### Training data

Policy-DistilBERT-7d was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2020a)](https://manifesto-project.wzb.eu/datasets). The model was trained on 129,669 sentences from 164 political manifestos from 55 political parties in 8 English-speaking countries (Australia, Canada, Ireland, Israel, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 and 2019.

The Manifesto Project manually annotates individual sentences from political party manifestos in 7 main political domains: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups' - see the [codebook](https://manifesto-project.wzb.eu/down/data/2020b/codebooks/codebook_MPDataset_MPDS2020b.pdf) for the exact definitions of each domain.

### Training procedure

`distilbert-base-uncased` was trained using the Hugging Face trainer with the following hyperparameters. The hyperparameters were determined using a hyperparameter search on a 15% validation set.

```
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=4e-05,
    per_device_train_batch_size=4,   # batch size per device during training
    per_device_eval_batch_size=4,    # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.02,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using 15% of the sentences (85-15 train-test split).

accuracy (balanced) | F1 (weighted) | precision | recall | accuracy (not balanced)
-------|---------|----------|---------|----------
0.745 | 0.773 | 0.772 | 0.771 | 0.771

Please note that the label distribution in the dataset is imbalanced:

```
Welfare and Quality of Life    0.327225
Economy                        0.259191
Fabric of Society              0.111800
Political System               0.095081
Social Groups                  0.094371
External Relations             0.063724
Freedom and Democracy          0.048608
```

[Balanced accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html) and [weighted F1](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html) were therefore used to evaluate model performance.

## Limitations and bias

The model was trained on sentences in political manifestos from parties in the 8 countries mentioned above between 1992-2019, manually annotated by the [Manifesto Project](https://manifesto-project.wzb.eu/information/documents/information). The model output therefore reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.

### BibTeX entry and citation info

```bibtex
@unpublished{
    title={Policy-DistilBERT},
    author={Moritz Laurer},
    year={2020},
    note={Unpublished paper}
}
```
5,215
cross-encoder/quora-roberta-large
[ "LABEL_0" ]
---
license: apache-2.0
---

# Cross-Encoder for Quora Duplicate Questions Detection

This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.

## Training Data

This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely it is that the two given questions are duplicates.

Note: The model is not suitable for estimating the similarity of questions. For example, the two questions "How to learn Java" and "How to learn Python" will receive a rather low score, as they are not duplicates.

## Usage and Performance

Pre-trained models can be used like this:

```
from sentence_transformers import CrossEncoder

model = CrossEncoder('model_name')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
```

You can also use this model without sentence_transformers, using only the Transformers ``AutoModel`` class; a sketch of this is shown below.
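As a rough sketch of that plain-Transformers route (not part of the original card): the checkpoint exposes a single `LABEL_0` output, so the duplicate probability can presumably be read from one logit with a sigmoid; the exact head and activation are assumptions to verify against the SentenceTransformers CrossEncoder defaults.

```python
# Hypothetical sketch: using the cross-encoder without sentence_transformers.
# Assumes a single-logit sequence-classification head whose sigmoid is the duplicate score.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cross-encoder/quora-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Question pairs are encoded together, one pair per row
features = tokenizer(
    ["How do I learn Java?", "How do I learn Java?"],
    ["What is the best way to learn Java?", "How do I learn Python?"],
    padding=True, truncation=True, return_tensors="pt",
)

model.eval()
with torch.no_grad():
    logits = model(**features).logits           # shape: (2, 1)
    scores = torch.sigmoid(logits).squeeze(-1)  # values near 1 suggest duplicates

print(scores.tolist())
```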
1,070
monologg/koelectra-base-v3-hate-speech
[ "hate", "none", "offensive" ]
Entry not found
15
fnlp/cpt-base
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
---
tags:
- fill-mask
- text2text-generation
- text-classification
- Summarization
- Chinese
- CPT
- BART
- BERT
- seq2seq
language: zh
---

# Chinese CPT-Base

## Model description

This is an implementation of CPT-Base. To use CPT, please import the file `modeling_cpt.py` (**Download** [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) that defines the architecture of CPT into your project.

[**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf)

Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu

**Github Link:** https://github.com/fastnlp/CPT

## Usage

```python
>>> from modeling_cpt import CPTForConditionalGeneration
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base")
>>> model = CPTForConditionalGeneration.from_pretrained("fnlp/cpt-base")
>>> input_ids = tokenizer.encode("北京是[MASK]的首都", return_tensors='pt')
>>> pred_ids = model.generate(input_ids, num_beams=4, max_length=20)
>>> print(tokenizer.convert_ids_to_tokens(pred_ids[0]))
    ['[SEP]', '[CLS]', '北', '京', '是', '中', '国', '的', '首', '都', '[SEP]']
```

**Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.**

## Citation

```bibtex
@article{shao2021cpt,
  title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation},
  author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
  journal={arXiv preprint arXiv:2109.05729},
  year={2021}
}
```
1,679
Jeevesh8/std_0pnt2_bert_ft_cola-20
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-21
null
Entry not found
15
zhayunduo/roberta-base-stocktwits-finetuned
null
---
license: apache-2.0
---

## **Sentiment Inferencing model for stock related comments**

#### *A project by NUS ISS students Frank Cao, Gerong Zhang, Jiaqi Yao, Sikai Ni, Yunduo Zhang*

<br />

### Description

This model is fine-tuned from roberta-base on 3,200,000 comments from stocktwits, using the user-labeled tags 'Bullish' or 'Bearish'. In the inference API, try something that individual investors might say on an investment forum, for example 'red' and 'green'.

[code on github](https://github.com/Gitrexx/PLPPM_Sentiment_Analysis_via_Stocktwits/tree/main/SentimentEngine)

<br />

### Training information

- batch size 32
- learning rate 2e-5

|        | Train loss | Validation loss | Validation accuracy |
| ------ | ---------- | --------------- | ------------------- |
| epoch1 | 0.3495     | 0.2956          | 0.8679              |
| epoch2 | 0.2717     | 0.2235          | 0.9021              |
| epoch3 | 0.2360     | 0.1875          | 0.9210              |
| epoch4 | 0.2106     | 0.1603          | 0.9343              |

<br />

# How to use
```python
import re

import emoji
import pandas as pd
from transformers import RobertaForSequenceClassification, RobertaTokenizer
from transformers import pipeline

# the model was trained on text preprocessed as below
def process_text(texts):
    # remove URLs
    texts = re.sub(r'https?://\S+', "", texts)
    texts = re.sub(r'www.\S+', "", texts)
    # remove '
    texts = texts.replace('&#39;', "'")
    # remove symbol names
    texts = re.sub(r'(\#)(\S+)', r'hashtag_\2', texts)
    texts = re.sub(r'(\$)([A-Za-z]+)', r'cashtag_\2', texts)
    # remove usernames
    texts = re.sub(r'(\@)(\S+)', r'mention_\2', texts)
    # demojize
    texts = emoji.demojize(texts, delimiters=("", " "))

    return texts.strip()

tokenizer_loaded = RobertaTokenizer.from_pretrained('zhayunduo/roberta-base-stocktwits-finetuned')
model_loaded = RobertaForSequenceClassification.from_pretrained('zhayunduo/roberta-base-stocktwits-finetuned')

nlp = pipeline("text-classification", model=model_loaded, tokenizer=tokenizer_loaded)

sentences = pd.Series(['just buy',
                       'just sell it',
                       'entity rocket to the sky!',
                       'go down',
                       'even though it is going up, I still think it will not keep this trend in the near future'])
# sentences = list(sentences.apply(process_text))  # if input text contains https, @, # or $ symbols, apply the preprocessing above for a more accurate result
sentences = list(sentences)
results = nlp(sentences)
print(results)  # 2 labels, label 0 is bearish, label 1 is bullish
```
2,610
Jeevesh8/std_0pnt2_bert_ft_cola-22
null
Entry not found
15
Rostlab/prot_bert_bfd_membrane
[ "Soluble", "Membrane" ]
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-27
null
Entry not found
15
ynie/xlnet-large-cased-snli_mnli_fever_anli_R1_R2_R3-nli
[ "entailment", "neutral", "contradiction" ]
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-23
null
Entry not found
15
textattack/albert-base-v2-ag-news
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
## TextAttack Model Card

This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9471052631578948, as measured by the eval set accuracy, found after 3 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
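The card stops at training details, so here is a minimal, hypothetical inference sketch using the Transformers pipeline; the checkpoint only exposes generic `LABEL_0`-`LABEL_3` names, and mapping them to the AG News topics (World, Sports, Business, Sci/Tech) is an assumption that should be checked against the `ag_news` label order.

```python
# Minimal sketch (not from the original card): classify a headline with the fine-tuned model.
from transformers import pipeline

classifier = pipeline("text-classification", model="textattack/albert-base-v2-ag-news")

result = classifier("Stocks rallied after the central bank held interest rates steady.")
print(result)
# e.g. [{'label': 'LABEL_2', 'score': ...}] -- LABEL_0..LABEL_3 are assumed to follow
# the ag_news ordering (World, Sports, Business, Sci/Tech); verify before relying on it.
```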
622
IDEA-CCNL/Erlangshen-Roberta-110M-NLI
[ "CONTRADICTION", "NEUTRAL", "ENTAILMENT" ]
---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "今天心情不好[SEP]今天很开心"
---

# Erlangshen-Roberta-110M-NLI

A Chinese NLI model and one of the models of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). We collected 4 NLI (Natural Language Inference) datasets in the Chinese domain for fine-tuning, with a total of 1,014,787 samples. Our model is mainly based on [roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext).

## Usage

```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-NLI')
model = BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-NLI')

texta = '今天的饭不好吃'
textb = '今天心情不好'

output = model(torch.tensor([tokenizer.encode(texta, textb)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
```

## Scores on downstream Chinese tasks (without any data augmentation)

| Model | cmnli | ocnli | snli |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-NLI | 80.83 | 78.56 | 88.01 |
| Erlangshen-Roberta-330M-NLI | 82.25 | 79.82 | 88 |
| Erlangshen-MegatronBert-1.3B-NLI | 84.52 | 84.17 | 88.67 |

## Citation

If you find the resource is useful, please cite the following website in your paper.

```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
1,571
Jeevesh8/std_0pnt2_bert_ft_cola-24
null
Entry not found
15
Skoltech/russian-sensitive-topics
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_100", "LABEL_101", "LABEL_102", "LABEL_103", "LABEL_104", "LABEL_105", "LABEL_106", "LABEL_107", "LABEL_108", "LABEL_109", "LABEL_11", "LABEL_110", "LABEL_111", "LABEL_112", "LABEL_113", "LABEL_114", "LABEL_115", "LABEL_116", "LABEL_117", "LABEL_118", "LABEL_119", "LABEL_12", "LABEL_120", "LABEL_121", "LABEL_122", "LABEL_123", "LABEL_124", "LABEL_125", "LABEL_126", "LABEL_127", "LABEL_128", "LABEL_129", "LABEL_13", "LABEL_130", "LABEL_131", "LABEL_132", "LABEL_133", "LABEL_134", "LABEL_135", "LABEL_136", "LABEL_137", "LABEL_138", "LABEL_139", "LABEL_14", "LABEL_140", "LABEL_141", "LABEL_142", "LABEL_143", "LABEL_144", "LABEL_145", "LABEL_146", "LABEL_147", "LABEL_148", "LABEL_149", "LABEL_15", "LABEL_150", "LABEL_151", "LABEL_152", "LABEL_153", "LABEL_154", "LABEL_155", "LABEL_156", "LABEL_157", "LABEL_158", "LABEL_159", "LABEL_16", "LABEL_160", "LABEL_161", "LABEL_162", "LABEL_163", "LABEL_164", "LABEL_165", "LABEL_166", "LABEL_167", "LABEL_168", "LABEL_169", "LABEL_17", "LABEL_170", "LABEL_171", "LABEL_172", "LABEL_173", "LABEL_174", "LABEL_175", "LABEL_176", "LABEL_177", "LABEL_178", "LABEL_179", "LABEL_18", "LABEL_180", "LABEL_181", "LABEL_182", "LABEL_183", "LABEL_184", "LABEL_185", "LABEL_186", "LABEL_187", "LABEL_188", "LABEL_189", "LABEL_19", "LABEL_190", "LABEL_191", "LABEL_192", "LABEL_193", "LABEL_194", "LABEL_195", "LABEL_196", "LABEL_197", "LABEL_198", "LABEL_199", "LABEL_2", "LABEL_20", "LABEL_200", "LABEL_201", "LABEL_202", "LABEL_203", "LABEL_204", "LABEL_205", "LABEL_206", "LABEL_207", "LABEL_208", "LABEL_209", "LABEL_21", "LABEL_210", "LABEL_211", "LABEL_212", "LABEL_213", "LABEL_214", "LABEL_215", "LABEL_216", "LABEL_217", "LABEL_218", "LABEL_219", "LABEL_22", "LABEL_220", "LABEL_221", "LABEL_222", "LABEL_223", "LABEL_224", "LABEL_225", "LABEL_226", "LABEL_227", "LABEL_228", "LABEL_229", "LABEL_23", "LABEL_230", "LABEL_231", "LABEL_232", "LABEL_233", "LABEL_234", "LABEL_235", "LABEL_236", "LABEL_237", "LABEL_238", "LABEL_239", "LABEL_24", "LABEL_240", "LABEL_241", "LABEL_242", "LABEL_243", "LABEL_244", "LABEL_245", "LABEL_246", "LABEL_247", "LABEL_248", "LABEL_249", "LABEL_25", "LABEL_250", "LABEL_251", "LABEL_252", "LABEL_253", "LABEL_254", "LABEL_255", "LABEL_256", "LABEL_257", "LABEL_258", "LABEL_259", "LABEL_26", "LABEL_260", "LABEL_261", "LABEL_262", "LABEL_263", "LABEL_264", "LABEL_265", "LABEL_266", "LABEL_267", "LABEL_268", "LABEL_269", "LABEL_27", "LABEL_270", "LABEL_271", "LABEL_272", "LABEL_273", "LABEL_274", "LABEL_275", "LABEL_276", "LABEL_277", "LABEL_278", "LABEL_279", "LABEL_28", "LABEL_280", "LABEL_281", "LABEL_282", "LABEL_283", "LABEL_284", "LABEL_285", "LABEL_286", "LABEL_287", "LABEL_288", "LABEL_289", "LABEL_29", "LABEL_290", "LABEL_291", "LABEL_292", "LABEL_293", "LABEL_294", "LABEL_295", "LABEL_296", "LABEL_297", "LABEL_298", "LABEL_299", "LABEL_3", "LABEL_30", "LABEL_300", "LABEL_301", "LABEL_302", "LABEL_303", "LABEL_304", "LABEL_305", "LABEL_306", "LABEL_307", "LABEL_308", "LABEL_309", "LABEL_31", "LABEL_310", "LABEL_311", "LABEL_312", "LABEL_313", "LABEL_314", "LABEL_315", "LABEL_316", "LABEL_317", "LABEL_318", "LABEL_319", "LABEL_32", "LABEL_320", "LABEL_321", "LABEL_322", "LABEL_323", "LABEL_324", "LABEL_325", "LABEL_326", "LABEL_327", "LABEL_328", "LABEL_329", "LABEL_33", "LABEL_330", "LABEL_331", "LABEL_332", "LABEL_333", "LABEL_334", "LABEL_335", "LABEL_336", "LABEL_337", "LABEL_338", "LABEL_339", "LABEL_34", "LABEL_340", "LABEL_341", "LABEL_342", "LABEL_343", "LABEL_344", "LABEL_345", 
"LABEL_346", "LABEL_347", "LABEL_348", "LABEL_349", "LABEL_35", "LABEL_350", "LABEL_351", "LABEL_352", "LABEL_353", "LABEL_354", "LABEL_355", "LABEL_356", "LABEL_357", "LABEL_358", "LABEL_359", "LABEL_36", "LABEL_360", "LABEL_361", "LABEL_362", "LABEL_363", "LABEL_364", "LABEL_365", "LABEL_366", "LABEL_367", "LABEL_368", "LABEL_369", "LABEL_37", "LABEL_370", "LABEL_371", "LABEL_372", "LABEL_373", "LABEL_374", "LABEL_375", "LABEL_376", "LABEL_377", "LABEL_378", "LABEL_379", "LABEL_38", "LABEL_380", "LABEL_381", "LABEL_382", "LABEL_383", "LABEL_384", "LABEL_385", "LABEL_386", "LABEL_387", "LABEL_388", "LABEL_389", "LABEL_39", "LABEL_390", "LABEL_391", "LABEL_392", "LABEL_4", "LABEL_40", "LABEL_41", "LABEL_42", "LABEL_43", "LABEL_44", "LABEL_45", "LABEL_46", "LABEL_47", "LABEL_48", "LABEL_49", "LABEL_5", "LABEL_50", "LABEL_51", "LABEL_52", "LABEL_53", "LABEL_54", "LABEL_55", "LABEL_56", "LABEL_57", "LABEL_58", "LABEL_59", "LABEL_6", "LABEL_60", "LABEL_61", "LABEL_62", "LABEL_63", "LABEL_64", "LABEL_65", "LABEL_66", "LABEL_67", "LABEL_68", "LABEL_69", "LABEL_7", "LABEL_70", "LABEL_71", "LABEL_72", "LABEL_73", "LABEL_74", "LABEL_75", "LABEL_76", "LABEL_77", "LABEL_78", "LABEL_79", "LABEL_8", "LABEL_80", "LABEL_81", "LABEL_82", "LABEL_83", "LABEL_84", "LABEL_85", "LABEL_86", "LABEL_87", "LABEL_88", "LABEL_89", "LABEL_9", "LABEL_90", "LABEL_91", "LABEL_92", "LABEL_93", "LABEL_94", "LABEL_95", "LABEL_96", "LABEL_97", "LABEL_98", "LABEL_99" ]
--- language: - ru tags: - toxic comments classification licenses: - cc-by-nc-sa --- ## General concept of the model This model is trained on the dataset of sensitive topics of the Russian language. The concept of sensitive topics is described [in this article ](https://www.aclweb.org/anthology/2021.bsnlp-1.4/) presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. Please note that this article describes the first version of the dataset, while the model is trained on the extended version of the dataset open-sourced on our [GitHub](https://github.com/skoltech-nlp/inappropriate-sensitive-topics/blob/main/Version2/sensitive_topics/sensitive_topics.csv) or on [kaggle](https://www.kaggle.com/nigula/russian-sensitive-topics). The properties of the dataset is the same as the one described in the article, the only difference is the size. ## Instructions The model predicts combinations of 18 sensitive topics described in the [article](https://arxiv.org/abs/2103.05345). You can find step-by-step instructions for using the model [here](https://github.com/skoltech-nlp/inappropriate-sensitive-topics/blob/main/Version2/sensitive_topics/Inference.ipynb) ## Metrics The dataset partially manually labeled samples and partially semi-automatically labeled samples. Learn more in our article. We tested the performance of the classifier only on the part of manually labeled data that is why some topics are not well represented in the test set. | | precision | recall | f1-score | support | |-------------------|-----------|--------|----------|---------| | offline_crime | 0.65 | 0.55 | 0.6 | 132 | | online_crime | 0.5 | 0.46 | 0.48 | 37 | | drugs | 0.87 | 0.9 | 0.88 | 87 | | gambling | 0.5 | 0.67 | 0.57 | 6 | | pornography | 0.73 | 0.59 | 0.65 | 204 | | prostitution | 0.75 | 0.69 | 0.72 | 91 | | slavery | 0.72 | 0.72 | 0.73 | 40 | | suicide | 0.33 | 0.29 | 0.31 | 7 | | terrorism | 0.68 | 0.57 | 0.62 | 47 | | weapons | 0.89 | 0.83 | 0.86 | 138 | | body_shaming | 0.9 | 0.67 | 0.77 | 109 | | health_shaming | 0.84 | 0.55 | 0.66 | 108 | | politics | 0.68 | 0.54 | 0.6 | 241 | | racism | 0.81 | 0.59 | 0.68 | 204 | | religion | 0.94 | 0.72 | 0.81 | 102 | | sexual_minorities | 0.69 | 0.46 | 0.55 | 102 | | sexism | 0.66 | 0.64 | 0.65 | 132 | | social_injustice | 0.56 | 0.37 | 0.45 | 181 | | none | 0.62 | 0.67 | 0.64 | 250 | | micro avg | 0.72 | 0.61 | 0.66 | 2218 | | macro avg | 0.7 | 0.6 | 0.64 | 2218 | | weighted avg | 0.73 | 0.61 | 0.66 | 2218 | | samples avg | 0.75 | 0.66 | 0.68 | 2218 | ## Licensing Information [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. 
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png ## Citation If you find this repository helpful, feel free to cite our publication: ``` @inproceedings{babakov-etal-2021-detecting, title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation", author = "Babakov, Nikolay and Logacheva, Varvara and Kozlova, Olga and Semenov, Nikita and Panchenko, Alexander", booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing", month = apr, year = "2021", address = "Kiyv, Ukraine", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4", pages = "26--36", abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.", } ```
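The card points to a notebook for step-by-step usage; as a very rough, hypothetical sketch, the raw prediction can be obtained as below, but note that translating the generic `LABEL_*` index into a combination of the 18 sensitive topics requires the id-to-topic mapping shipped with the authors' GitHub instructions, which this sketch does not reproduce.

```python
# Hypothetical sketch: raw inference only; the label-id -> topic-combination mapping
# must be taken from the authors' repository (see the Instructions section above).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Skoltech/russian-sensitive-topics"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Пример текста для проверки модели."  # "An example text to test the model."
inputs = tokenizer(text, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))  # index of the predicted topic combination
print(predicted_id, model.config.id2label[predicted_id])
```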
5,040
roberta-large-openai-detector
null
--- language: en license: mit tags: - exbert datasets: - bookcorpus - wikipedia --- # RoBERTa Large OpenAI Detector ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) - [How To Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version. - **Developed by:** OpenAI, see [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for full author list - **Model Type:** Fine-tuned transformer-based language model - **Language(s):** English - **License:** MIT - **Related Models:** [RoBERTa large](https://huggingface.co/roberta-large), [GPT-XL (1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about Automated ML-based detection). - [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) - [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/) - [Explore the detector model here](https://huggingface.co/openai-detector ) ## Uses #### Direct Use The model is a classifier that can be used to detect text generated by GPT-2 models. #### Downstream Use The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
#### Risks and Limitations In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: > We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness. #### Bias Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa large and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa large](https://huggingface.co/roberta-large) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf). ## Training #### Training Data The model is a sequence classifier based on RoBERTa large (see the [RoBERTa large model card](https://huggingface.co/roberta-large) for more details on the RoBERTa large training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)). #### Training Procedure The model developers write that: > We based a sequence classifier on RoBERTaLARGE (355 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model. They later state: > To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf). #### Testing Data, Factors and Metrics The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: > testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training. 
#### Results The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf): > Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results. ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications The model developers write that: See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the modeling architecture and training details. ## Citation Information ```bibtex @article{solaiman2019release, title={Release strategies and the social impacts of language models}, author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others}, journal={arXiv preprint arXiv:1908.09203}, year={2019} } ``` APA: - Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203. ## Model Card Authors This model card was written by the team at Hugging Face. ## How to Get Started with the Model More information needed
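Since the "How to Get Started" section above is empty, here is a minimal, hypothetical sketch of running the detector through the Transformers pipeline; the label names, and which one denotes machine-generated text, are not given in this card, so check `model.config.id2label` before interpreting the scores.

```python
# Hypothetical sketch: score a passage with the GPT-2 output detector.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-large-openai-detector")

text = "The quick brown fox jumps over the lazy dog."
print(detector(text))
# The returned label indicates whether the text looks human-written or GPT-2-generated;
# the exact label strings come from the model config and are not documented above.
```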
9,182
Jeevesh8/std_0pnt2_bert_ft_cola-25
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-26
null
Entry not found
15
eleldar/theme-classification
[ "contradiction", "entailment", "neutral" ]
--- license: mit thumbnail: https://huggingface.co/front/thumbnails/facebook.png pipeline_tag: zero-shot-classification datasets: - multi_nli --- # Clone from [https://huggingface.co/facebook/bart-large-mnli](bart-large-mnli) This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/datasets/multi_nli) dataset. Additional information about this model: - The [bart-large](https://huggingface.co/facebook/bart-large) model page - [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension ](https://arxiv.org/abs/1910.13461) - [BART fairseq implementation](https://github.com/pytorch/fairseq/tree/master/fairseq/models/bart) ## NLI-based Zero Shot Text Classification [Yin et al.](https://arxiv.org/abs/1909.00161) proposed a method for using pre-trained NLI models as a ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and to construct a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct a hypothesis of `This text is about politics.`. The probabilities for entailment and contradiction are then converted to label probabilities. This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and Roberta. See [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html) for a more expansive introduction to this and other zero shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code. #### With the zero-shot classification pipeline The model can be loaded with the `zero-shot-classification` pipeline like so: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli") ``` You can then use this pipeline to classify sequences into any of the class names you specify. ```python sequence_to_classify = "one day I will see the world" candidate_labels = ['travel', 'cooking', 'dancing'] classifier(sequence_to_classify, candidate_labels) #{'labels': ['travel', 'dancing', 'cooking'], # 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289], # 'sequence': 'one day I will see the world'} ``` If more than one candidate label can be correct, pass `multi_class=True` to calculate each class independently: ```python candidate_labels = ['travel', 'cooking', 'dancing', 'exploration'] classifier(sequence_to_classify, candidate_labels, multi_class=True) #{'labels': ['travel', 'exploration', 'dancing', 'cooking'], # 'scores': [0.9945111274719238, # 0.9383890628814697, # 0.0057061901316046715, # 0.0018193122232332826], # 'sequence': 'one day I will see the world'} ``` #### With manual PyTorch ```python # pose sequence as a NLI premise and label as a hypothesis from transformers import AutoModelForSequenceClassification, AutoTokenizer nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli') tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli') premise = sequence hypothesis = f'This example is {label}.' 
# run through model pre-trained on MNLI x = tokenizer.encode(premise, hypothesis, return_tensors='pt', truncation_strategy='only_first') logits = nli_model(x.to(device))[0] # we throw away "neutral" (dim 1) and take the probability of # "entailment" (2) as the probability of the label being true entail_contradiction_logits = logits[:,[0,2]] probs = entail_contradiction_logits.softmax(dim=1) prob_label_is_true = probs[:,1] ```
3,855
Jeevesh8/std_0pnt2_bert_ft_cola-31
null
Entry not found
15
microsoft/DialogRPT-human-vs-machine
null
# Demo

Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)

| Context | Response | `human_vs_machine` score |
| :------ | :------- | :------------: |
| I love NLP! | I'm not sure if it's a good idea. | 0.000 |
| I love NLP! | Me too! | 0.605 |

The `human_vs_machine` score predicts how likely the response is from a human rather than a machine.

# DialogRPT-human-vs-machine

### Dialog Ranking Pretrained Transformers

> How likely a dialog response is upvoted 👍 and/or gets replied 💬?

This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is trained to predict. It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/), trained on more than 100 million human feedback data points. It can be used to improve existing dialog generation models (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates.

Quick Links:
* [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/)
* [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT)
* [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)

We considered the following tasks and provided corresponding pretrained models.

|Task | Description | Pretrained model |
| :------------- | :----------- | :-----------: |
| **Human feedback** | **given a context and its two human responses, predict...**| |
| `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) |
| `width`| ... which gets more direct replies? | [model card](https://huggingface.co/microsoft/DialogRPT-width) |
| `depth`| ... which gets longer follow-up thread? | [model card](https://huggingface.co/microsoft/DialogRPT-depth) |
| **Human-like** (human vs fake) | **given a context and one human response, distinguish it with...** | |
| `human_vs_rand`| ... a random human response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-rand) |
| `human_vs_machine`| ... a machine generated response | this model |

### Contact:

Please create an issue on [our repo](https://github.com/golsun/DialogRPT)

### Citation:
```
@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
    author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
    year={2020},
    booktitle={EMNLP}
}
```
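As a sketch of how the score in the demo table above could be computed locally: the snippet below roughly follows the pattern used elsewhere in the DialogRPT model series, so the `<|endoftext|>` separator and the sigmoid readout are assumptions to verify against the repository linked above.

```python
# Sketch: score how human-like a response is, given its context.
# Context and response are assumed to be joined with "<|endoftext|>".
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "microsoft/DialogRPT-human-vs-machine"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def score(context: str, response: str) -> float:
    """Return the human_vs_machine score (higher = more likely human)."""
    model_input = tokenizer.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        result = model(model_input, return_dict=True)
    return torch.sigmoid(result.logits).item()

print(score("I love NLP!", "Me too!"))                            # expected around 0.6
print(score("I love NLP!", "I'm not sure if it's a good idea."))  # expected near 0.0
```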
2,636
cross-encoder/mmarco-mMiniLMv2-L12-H384-v1
[ "LABEL_0" ]
---
license: apache-2.0
language:
- en
- ar
- zh
- nl
- fr
- de
- hi
- in
- it
- ja
- pt
- ru
- es
- vi
- multilingual
datasets:
- unicamp-dl/mmarco
---

# Cross-Encoder for multilingual MS Marco

This model was trained on the [MMARCO](https://hf.co/unicamp-dl/mmarco) dataset. It is a machine-translated version of MS MARCO using Google Translate, translated into 14 languages. In our experiments, we observed that it also performs well for other languages.

As a base model, we used the [multilingual MiniLMv2](https://huggingface.co/nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large) model.

The model can be used for Information Retrieval: Given a query, score the query with all possible passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)

## Usage with SentenceTransformers

Usage is easy when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('model_name')
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')])
```

## Usage with Transformers

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')

features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'],
                     ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
                      'New York City is famous for the Metropolitan Museum of Art.'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```
2,130
Jeevesh8/std_0pnt2_bert_ft_cola-28
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-29
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-30
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-35
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-32
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-33
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-34
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-36
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-37
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-38
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-39
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-40
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-41
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-43
null
Entry not found
15
howey/roberta-large-qqp
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-42
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-44
null
Entry not found
15
ptaszynski/yacis-electra-small-japanese-cyberbullying
null
--- language: ja license: cc-by-sa-4.0 datasets: - YACIS corpus - Harmful BBS Japanese comments dataset - Twitter Japanese cyberbullying dataset --- # yacis-electra-small-cyberbullying This is an [ELECTRA](https://github.com/google-research/electra) Small model for the Japanese language finetuned for automatic cyberbullying detection. The original foundation model was originally pretrained on 5.6 billion words [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus, and later finetuned on a balanced dataset created by unifying two datasets, namely "Harmful BBS Japanese comments dataset" and "Twitter Japanese cyberbullying dataset". ## Model architecture The original model was pretrained using ELECTRA Small model settings and can be found here: [https://huggingface.co/ptaszynski/yacis-electra-small-japanese](https://huggingface.co/ptaszynski/yacis-electra-small-japanese) ## Licenses The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License. <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a> ## Citations Please, cite this model using the following citation. ``` @inproceedings{shibata2022yacis-electra, title={日本語大規模ブログコーパスYACISに基づいたELECTRA事前学習済み言語モデルの作成及び性能評価}, % title={Development and performance evaluation of ELECTRA pretrained language model based on YACIS large-scale Japanese blog corpus [in Japanese]}, %% for English citations author={柴田 祥伍 and プタシンスキ ミハウ and エロネン ユーソ and ノヴァコフスキ カロル and 桝井 文人}, % author={Shibata, Shogo and Ptaszynski, Michal and Eronen, Juuso and Nowakowski, Karol and Masui, Fumito}, %% for English citations booktitle={言語処理学会第28回年次大会(NLP2022) (予定)}, % booktitle={Proceedings of The 28th Annual Meeting of The Association for Natural Language Processing (NLP2022)}, %% for English citations pages={1--4}, year={2022} } ``` The two datasets used for finetuning should be cited using the following references. - Harmful BBS Japanese comments dataset: ``` @book{ptaszynski2018automatic, title={Automatic Cyberbullying Detection: Emerging Research and Opportunities: Emerging Research and Opportunities}, author={Ptaszynski, Michal E and Masui, Fumito}, year={2018}, publisher={IGI Global} } ``` ``` @article{松葉達明2009学校非公式サイトにおける有害情報検出, title={学校非公式サイトにおける有害情報検出}, author={松葉達明 and 里見尚宏 and 桝井文人 and 河合敦夫 and 井須尚紀}, journal={電子情報通信学会技術研究報告. NLC, 言語理解とコミュニケーション}, volume={109}, number={142}, pages={93--98}, year={2009}, publisher={一般社団法人電子情報通信学会} } ``` - Twitter Japanese cyberbullying dataset: ``` TBA ``` The pretraining was done using YACIS corpus, which should be cited using at least one of the following references. 
``` @inproceedings{ptaszynski2012yacis, title={YACIS: A five-billion-word corpus of Japanese blogs fully annotated with syntactic and affective information}, author={Ptaszynski, Michal and Dybala, Pawel and Rzepka, Rafal and Araki, Kenji and Momouchi, Yoshio}, booktitle={Proceedings of the AISB/IACAP world congress}, pages={40--49}, year={2012}, howpublished = "\url{https://github.com/ptaszynski/yacis-corpus}" } ``` ``` @article{ptaszynski2014automatically, title={Automatically annotating a five-billion-word corpus of Japanese blogs for sentiment and affect analysis}, author={Ptaszynski, Michal and Rzepka, Rafal and Araki, Kenji and Momouchi, Yoshio}, journal={Computer Speech \& Language}, volume={28}, number={1}, pages={38--55}, year={2014}, publisher={Elsevier}, howpublished = "\url{https://github.com/ptaszynski/yacis-corpus}" } ```
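The card does not include inference code; a minimal, hypothetical sketch with the Transformers pipeline is given below. The label names exposed by the checkpoint, and which one marks cyberbullying, are not documented here, so inspect `model.config.id2label` before using the output.

```python
# Hypothetical sketch: run the Japanese cyberbullying classifier on a sample sentence.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ptaszynski/yacis-electra-small-japanese-cyberbullying",
)

print(classifier("今日はとてもいい天気ですね。"))  # "It's very nice weather today."
# The returned label string comes from the model config; its mapping to
# harmful / non-harmful is not stated in the card above.
```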
3,795
cmarkea/distilcamembert-base-nli
[ "contradiction", "entailment", "neutral" ]
---
language: fr
license: mit
tags:
- zero-shot-classification
- sentence-similarity
- nli
pipeline_tag: zero-shot-classification
widget:
- text: "Selon certains physiciens, un univers parallèle, miroir du nôtre ou relevant de ce que l'on appelle la théorie des branes, autoriserait des neutrons à sortir de notre Univers pour y entrer à nouveau. L'idée a été testée une nouvelle fois avec le réacteur nucléaire de l'Institut Laue-Langevin à Grenoble, plus précisément en utilisant le détecteur de l'expérience Stereo initialement conçu pour chasser des particules de matière noire potentielles, les neutrinos stériles."
  candidate_labels: "politique, science, sport, santé"
  hypothesis_template: "Ce texte parle de {}."
datasets:
- flue
---

DistilCamemBERT-NLI
===================

We present DistilCamemBERT-NLI, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the Natural Language Inference (NLI) task for the French language, also known as recognizing textual entailment (RTE). The model is trained on the XNLI dataset, where the task is to determine whether a premise entails, contradicts, or neither entails nor contradicts a hypothesis. This model is close to [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), which is based on the [CamemBERT](https://huggingface.co/camembert-base) model. The drawback of CamemBERT-based models is their cost at scale, for example in the production phase: inference cost can become a real issue, especially in a cross-encoding setting like this task. To counteract this effect, we propose this model, which halves the inference time at the same power consumption thanks to DistilCamemBERT.

Dataset
-------

The XNLI dataset from [FLUE](https://huggingface.co/datasets/flue) is composed of 392,702 premise-hypothesis pairs for training and 5,010 pairs for testing. The goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B?), which is a classification task (given two sentences, predict one of three labels). Sentence A is called the *premise* and sentence B the *hypothesis*; the goal of the model is then determined as follows:

$$P(premise=c\in\{contradiction, entailment, neutral\}\vert hypothesis)$$

Evaluation results
------------------

| **class** | **precision (%)** | **f1-score (%)** | **support** |
| :----------------: | :---------------: | :--------------: | :---------: |
| **global** | 77.70 | 77.45 | 5,010 |
| **contradiction** | 78.00 | 79.54 | 1,670 |
| **entailment** | 82.90 | 78.87 | 1,670 |
| **neutral** | 72.18 | 74.04 | 1,670 |

Benchmark
---------

We compare the [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) model to 2 other models working on the French language. The first one, [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), is based on the aptly named [CamemBERT](https://huggingface.co/camembert-base), the French RoBERTa model, and the second one, [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli), is based on [mDeBERTav3](https://huggingface.co/microsoft/mdeberta-v3-base), a multilingual model.
To compare the performances the metrics of accuracy and [MCC (Matthews Correlation Coefficient)](https://en.wikipedia.org/wiki/Phi_coefficient) was used and for the mean inference time measure, an **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores** was used: | **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** | | :--------------: | :-----------: | :--------------: | :------------: | | [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **51.35** | 77.45 | 66.24 | | [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 105.0 | 81.72 | 72.67 | | [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 299.18 | **83.43** | **75.15** | Zero-shot classification ------------------------ The main advantage of such modelization is to create a zero-shot classifier allowing text classification without training. This task can be summarized by: $$P(hypothesis=i\in\mathcal{C}|premise)=\frac{e^{P(premise=entailment\vert hypothesis=i)}}{\sum_{j\in\mathcal{C}}e^{P(premise=entailment\vert hypothesis=j)}}$$ For this part, we use 2 datasets, the first one: [allocine](https://huggingface.co/datasets/allocine) used to train the sentiment analysis models. The dataset is composed of 2 classes: "positif" and "négatif" appreciation of movies reviews. Here we use "Ce commentaire est {}." as the hypothesis template and "positif" and "négatif" as candidate labels. | **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** | | :--------------: | :-----------: | :--------------: | :------------: | | [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **195.54** | 80.59 | 63.71 | | [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 378.39 | **86.37** | **73.74** | | [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 520.58 | 84.97 | 70.05 | The second one: [mlsum](https://huggingface.co/datasets/mlsum) used to train the summarization models. We use the articles summary part to predict their topics. In this aim, we aggregate sub-topics and select a few of them. In this case, the hypothesis template used is "C'est un article traitant de {}." and the candidate labels are: "économie", "politique", "sport" and "science". 
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** | | :--------------: | :-----------: | :--------------: | :------------: | | [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **217.77** | **79.30** | **70.55** | | [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 448.27 | 70.7 | 64.10 | | [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 591.34 | 64.45 | 58.67 | How to use DistilCamemBERT-NLI ------------------------------ ```python from transformers import pipeline classifier = pipeline( task='zero-shot-classification', model="cmarkea/distilcamembert-base-nli", tokenizer="cmarkea/distilcamembert-base-nli" ) result = classifier ( sequences="Le style très cinéphile de Quentin Tarantino " "se reconnaît entre autres par sa narration postmoderne " "et non linéaire, ses dialogues travaillés souvent " "émaillés de références à la culture populaire, et ses " "scènes hautement esthétiques mais d'une violence " "extrême, inspirées de films d'exploitation, d'arts " "martiaux ou de western spaghetti.", candidate_labels="cinéma, technologie, littérature, politique", hypothesis_template="Ce texte parle de {}." ) result {"labels": ["cinéma", "littérature", "technologie", "politique"], "scores": [0.7164115309715271, 0.12878799438476562, 0.1092301607131958, 0.0455702543258667]} ``` Citation -------- ```bibtex @inproceedings{delestre:hal-03674695, TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}}, AUTHOR = {Delestre, Cyrile and Amar, Abibatou}, URL = {https://hal.archives-ouvertes.fr/hal-03674695}, BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}}, ADDRESS = {Vannes, France}, YEAR = {2022}, MONTH = Jul, KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation}, PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf}, HAL_ID = {hal-03674695}, HAL_VERSION = {v1}, } ```
8,208
Jeevesh8/std_0pnt2_bert_ft_cola-45
null
Entry not found
15
bhadresh-savani/distilbert-base-uncased-sentiment-sst2
[ "NEGATIVE", "POSITIVE" ]
---
language: en
license: apache-2.0
datasets:
- sst2
---

# distilbert-base-uncased-sentiment-sst2

This model classifies a sentence as expressing positive or negative sentiment.

## Dataset:
The Stanford Sentiment Treebank from GLUE

## Results:
```
***** eval metrics *****
  epoch                   =        3.0
  eval_accuracy           =     0.9094
  eval_loss               =     0.3514
  eval_runtime            = 0:00:03.60
  eval_samples            =        872
  eval_samples_per_second =    242.129
  eval_steps_per_second   =     30.266
```
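A minimal usage sketch (not part of the original card): the NEGATIVE/POSITIVE labels match the label list published with the checkpoint, but the example sentence and its score are illustrative only.

```python
# Minimal sketch: binary sentiment with the SST-2 fine-tuned DistilBERT.
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-sentiment-sst2",
)

print(sentiment("The movie was surprisingly heartfelt and well acted."))
# e.g. [{'label': 'POSITIVE', 'score': ...}]
```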
557
Jeevesh8/std_0pnt2_bert_ft_cola-48
null
Entry not found
15
techthiyanes/chinese_sentiment
[ "star 1", "star 2", "star 3", "star 4", "star 5" ]
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-47
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-46
null
Entry not found
15
geckos/bart-fined-tuned-on-entailment-classification
[ "contradiction", "entailment", "neutral" ]
Entry not found
15
howey/roberta-large-qnli
null
Entry not found
15
navteca/bart-large-mnli
[ "contradiction", "neutral", "entailment" ]
--- datasets: - multi_nli language: en license: mit pipeline_tag: zero-shot-classification tags: - bart - zero-shot-classification --- # Bart large model for NLI-based Zero Shot Text Classification This model uses [bart-large](https://huggingface.co/facebook/bart-large). ## Training Data This model was trained on the [MultiNLI (MNLI)](https://huggingface.co/datasets/multi_nli) dataset in the manner originally described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161). It can be used to predict whether a topic label can be assigned to a given sequence, whether or not the label has been seen before. ## Usage and Performance The trained model can be used like this: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline # Load model & tokenizer bart_model = AutoModelForSequenceClassification.from_pretrained('navteca/bart-large-mnli') bart_tokenizer = AutoTokenizer.from_pretrained('navteca/bart-large-mnli') # Get predictions nlp = pipeline('zero-shot-classification', model=bart_model, tokenizer=bart_tokenizer) sequence = 'One day I will see the world.' candidate_labels = ['cooking', 'dancing', 'travel'] result = nlp(sequence, candidate_labels, multi_label=True) print(result) #{ # "sequence": "One day I will see the world.", # "labels": [ # "travel", # "dancing", # "cooking" # ], # "scores": [ # 0.9941897988319397, # 0.0060537424869835, # 0.0020010927692056 # ] #} ```
1,463
Jeevesh8/std_0pnt2_bert_ft_cola-49
null
Entry not found
15
prajjwal1/bert-tiny-mnli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI. If you use the model, please consider citing the paper ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). ``` MNLI: 60% MNLI-mm: 61.61% ``` These models were trained for 4 epochs. [@prajjwal_1](https://twitter.com/prajjwal_1)
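As a rough usage sketch (not from the original card): the checkpoint exposes three generic labels (`LABEL_0`-`LABEL_2`); the usual MNLI ordering is entailment/neutral/contradiction, but that mapping is an assumption to confirm against the training setup linked above.

```python
# Hypothetical sketch: NLI prediction with the tiny BERT MNLI model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "prajjwal1/bert-tiny-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# LABEL_0..LABEL_2 are assumed to follow the MNLI order (entailment, neutral, contradiction).
for name, p in zip(["entailment", "neutral", "contradiction"], probs.tolist()):
    print(f"{name}: {p:.3f}")
```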
992
Jeevesh8/std_0pnt2_bert_ft_cola-50
null
Entry not found
15
wietsedv/bert-base-dutch-cased-finetuned-sentiment
[ "neg", "pos" ]
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-51
null
Entry not found
15
Jeevesh8/std_0pnt2_bert_ft_cola-52
null
Entry not found
15