| modelId (string, 6–107 chars) | label (list) | readme (string, 0–56.2k chars) | readme_len (int64, 0–56.2k) |
|---|---|---|---|
l3cube-pune/MarathiSentiment | [
"Negative",
"Neutral",
"Positive"
] | ---
language: mr
tags:
- albert
license: cc-by-4.0
datasets:
- L3CubeMahaSent
widget:
- text: "I like you. </s></s> I love you."
---
## MarathiSentiment
MarathiSentiment is an IndicBERT (ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent, a Marathi tweet-based sentiment analysis dataset.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](http://arxiv.org/abs/2103.11408)
```
@inproceedings{kulkarni2021l3cubemahasent,
title={L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset},
author={Kulkarni, Atharva and Mandhane, Meet and Likhitkar, Manali and Kshirsagar, Gayatri and Joshi, Raviraj},
booktitle={Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis},
pages={213--220},
year={2021}
}
``` | 888 |
erst/xlm-roberta-base-finetuned-nace | [
"0111",
"0112",
"0113",
"0114",
"0115",
"0116",
"0119",
"0121",
"0122",
"0123",
"0124",
"0125",
"0126",
"0127",
"0128",
"0129",
"0130",
"0141",
"0142",
"0143",
"0144",
"0145",
"0146",
"0147",
"0149",
"0150",
"0161",
"0162",
"0163",
"0164",
"0170",
"0210",
"0220",
"0230",
"0240",
"0311",
"0312",
"0321",
"0322",
"0510",
"0520",
"0610",
"0620",
"0710",
"0721",
"0729",
"0811",
"0812",
"0891",
"0892",
"0893",
"0899",
"0910",
"0990",
"1011",
"1012",
"1013",
"1020",
"1031",
"1032",
"1039",
"1041",
"1042",
"1051",
"1052",
"1061",
"1062",
"1071",
"1072",
"1073",
"1081",
"1082",
"1083",
"1084",
"1085",
"1086",
"1089",
"1091",
"1092",
"1101",
"1102",
"1103",
"1104",
"1105",
"1106",
"1107",
"1200",
"1310",
"1320",
"1330",
"1391",
"1392",
"1393",
"1394",
"1395",
"1396",
"1399",
"1411",
"1412",
"1413",
"1414",
"1419",
"1420",
"1431",
"1439",
"1511",
"1512",
"1520",
"1610",
"1621",
"1622",
"1623",
"1624",
"1629",
"1711",
"1712",
"1721",
"1722",
"1723",
"1724",
"1729",
"1811",
"1812",
"1813",
"1814",
"1820",
"1910",
"1920",
"2011",
"2012",
"2013",
"2014",
"2015",
"2016",
"2017",
"2020",
"2030",
"2041",
"2042",
"2051",
"2052",
"2053",
"2059",
"2060",
"2110",
"2120",
"2211",
"2219",
"2221",
"2222",
"2223",
"2229",
"2311",
"2312",
"2313",
"2314",
"2319",
"2320",
"2331",
"2332",
"2341",
"2342",
"2343",
"2344",
"2349",
"2351",
"2352",
"2361",
"2362",
"2363",
"2364",
"2365",
"2369",
"2370",
"2391",
"2399",
"2410",
"2420",
"2431",
"2432",
"2433",
"2434",
"2441",
"2442",
"2443",
"2444",
"2445",
"2446",
"2451",
"2452",
"2453",
"2454",
"2511",
"2512",
"2521",
"2529",
"2530",
"2540",
"2550",
"2561",
"2562",
"2571",
"2572",
"2573",
"2591",
"2592",
"2593",
"2594",
"2599",
"2611",
"2612",
"2620",
"2630",
"2640",
"2651",
"2652",
"2660",
"2670",
"2680",
"2711",
"2712",
"2720",
"2731",
"2732",
"2733",
"2740",
"2751",
"2752",
"2790",
"2811",
"2812",
"2813",
"2814",
"2815",
"2821",
"2822",
"2823",
"2824",
"2825",
"2829",
"2830",
"2841",
"2849",
"2891",
"2892",
"2893",
"2894",
"2895",
"2896",
"2899",
"2910",
"2920",
"2931",
"2932",
"3011",
"3012",
"3020",
"3030",
"3040",
"3091",
"3092",
"3099",
"3101",
"3102",
"3103",
"3109",
"3211",
"3212",
"3213",
"3220",
"3230",
"3240",
"3250",
"3291",
"3299",
"3311",
"3312",
"3313",
"3314",
"3315",
"3316",
"3317",
"3319",
"3320",
"3511",
"3512",
"3513",
"3514",
"3521",
"3522",
"3523",
"3530",
"3600",
"3700",
"3811",
"3812",
"3821",
"3822",
"3831",
"3832",
"3900",
"4110",
"4120",
"4211",
"4212",
"4213",
"4221",
"4222",
"4291",
"4299",
"4311",
"4312",
"4313",
"4321",
"4322",
"4329",
"4331",
"4332",
"4333",
"4334",
"4339",
"4391",
"4399",
"4511",
"4519",
"4520",
"4531",
"4532",
"4540",
"4611",
"4612",
"4613",
"4614",
"4615",
"4616",
"4617",
"4618",
"4619",
"4621",
"4622",
"4623",
"4624",
"4631",
"4632",
"4633",
"4634",
"4635",
"4636",
"4637",
"4638",
"4639",
"4641",
"4642",
"4643",
"4644",
"4645",
"4646",
"4647",
"4648",
"4649",
"4651",
"4652",
"4661",
"4662",
"4663",
"4664",
"4665",
"4666",
"4669",
"4671",
"4672",
"4673",
"4674",
"4675",
"4676",
"4677",
"4690",
"4711",
"4719",
"4721",
"4722",
"4723",
"4724",
"4725",
"4726",
"4729",
"4730",
"4741",
"4742",
"4743",
"4751",
"4752",
"4753",
"4754",
"4759",
"4761",
"4762",
"4763",
"4764",
"4765",
"4771",
"4772",
"4773",
"4774",
"4775",
"4776",
"4777",
"4778",
"4779",
"4781",
"4782",
"4789",
"4791",
"4799",
"4910",
"4920",
"4931",
"4932",
"4939",
"4941",
"4942",
"4950",
"5010",
"5020",
"5030",
"5040",
"5110",
"5121",
"5122",
"5210",
"5221",
"5222",
"5223",
"5224",
"5229",
"5310",
"5320",
"5510",
"5520",
"5530",
"5590",
"5610",
"5621",
"5629",
"5630",
"5811",
"5812",
"5813",
"5814",
"5819",
"5821",
"5829",
"5911",
"5912",
"5913",
"5914",
"5920",
"6010",
"6020",
"6110",
"6120",
"6130",
"6190",
"6201",
"6202",
"6203",
"6209",
"6311",
"6312",
"6391",
"6399",
"6411",
"6419",
"6420",
"6430",
"6491",
"6492",
"6499",
"6511",
"6512",
"6520",
"6530",
"6611",
"6612",
"6619",
"6621",
"6622",
"6629",
"6630",
"6810",
"6820",
"6831",
"6832",
"6910",
"6920",
"7010",
"7021",
"7022",
"7111",
"7112",
"7120",
"7211",
"7219",
"7220",
"7311",
"7312",
"7320",
"7410",
"7420",
"7430",
"7490",
"7500",
"7711",
"7712",
"7721",
"7722",
"7729",
"7731",
"7732",
"7733",
"7734",
"7735",
"7739",
"7740",
"7810",
"7820",
"7830",
"7911",
"7912",
"7990",
"8010",
"8020",
"8030",
"8110",
"8121",
"8122",
"8129",
"8130",
"8211",
"8219",
"8220",
"8230",
"8291",
"8292",
"8299",
"8411",
"8412",
"8413",
"8421",
"8422",
"8423",
"8424",
"8425",
"8430",
"8510",
"8520",
"8531",
"8532",
"8541",
"8542",
"8551",
"8552",
"8553",
"8559",
"8560",
"8610",
"8621",
"8622",
"8623",
"8690",
"8710",
"8720",
"8730",
"8790",
"8810",
"8891",
"8899",
"9001",
"9002",
"9003",
"9004",
"9101",
"9102",
"9103",
"9104",
"9200",
"9311",
"9312",
"9313",
"9319",
"9321",
"9329",
"9411",
"9412",
"9420",
"9491",
"9492",
"9499",
"9511",
"9512",
"9521",
"9522",
"9523",
"9524",
"9525",
"9529",
"9601",
"9602",
"9603",
"9604",
"9609",
"9700",
"9810",
"9820",
"9900"
] | # Classifying Text into NACE Codes
This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned to classify descriptions of activities into [NACE Rev. 2](https://ec.europa.eu/eurostat/web/nace-rev2) codes.
## Data
The data used to fine-tune the model consist of 2.5 million descriptions of activities from Norwegian and Danish businesses. To improve the model's multilingual performance, random samples of the Norwegian and Danish descriptions were machine translated into the following languages:
- English
- German
- Spanish
- French
- Finnish
- Polish
## Quick Start
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("erst/xlm-roberta-base-finetuned-nace")
model = AutoModelForSequenceClassification.from_pretrained("erst/xlm-roberta-base-finetuned-nace")

# "sentiment-analysis" is an alias for the generic text-classification pipeline;
# with return_all_scores=False, only the top-scoring NACE code is returned.
pl = pipeline(
    "sentiment-analysis",
    model=model,
    tokenizer=tokenizer,
    return_all_scores=False,
)
pl("The purpose of our company is to build houses")
```
| 1,049 |
khalidalt/DeBERTa-v3-large-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "The Movie have been criticized for the story. However, I think it is a great movie. [SEP] I liked the movie."
---
# DeBERTa-v3-large-mnli
## Model description
This model was trained on the Multi-Genre Natural Language Inference (MultiNLI) dataset, which consists of 433k sentence pairs annotated with textual entailment information.
The base model is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-v3-large). DeBERTa v3 outperforms BERT and RoBERTa on the majority of NLU benchmarks by using disentangled attention and an enhanced mask decoder. More information about the original model is available in the [official repository](https://github.com/microsoft/DeBERTa) and the [paper](https://arxiv.org/abs/2006.03654)
## Intended uses & limitations
#### How to use the model
```python
premise = "The Movie have been criticized for the story. However, I think it is a great movie."
hypothesis = "I liked the movie."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1)
label_names = ["entailment", "neutral", "contradiction"]
print(label_names[prediction.argmax(0).tolist()])
```
### Training data
The model was fine-tuned on the MultiNLI training set, which consists of 392K sentence pairs annotated with textual entailment information.
### Training procedure
DeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```
train_args = TrainingArguments(
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=3,
warmup_ratio=0.06,
weight_decay=0.1,
fp16=True,
seed=42,
)
```
### BibTeX entry and citation info
Please cite the [DeBERTa paper](https://arxiv.org/abs/2006.03654) and the [MultiNLI dataset](https://cims.nyu.edu/~sbowman/multinli/paper.pdf) if you use this model, and include a link to this Hugging Face Hub page. | 2,132 |
transformersbook/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271664736493986
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. The model is trained in Chapter 2: Text Classification in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/02_classification.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.2192
- Accuracy: 0.927
- F1: 0.9272
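A minimal inference sketch (not part of the auto-generated card; it assumes the standard `transformers` pipeline API and this checkpoint's six emotion labels):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub; the labels are
# sadness, joy, love, anger, fear and surprise.
classifier = pipeline(
    "text-classification",
    model="transformersbook/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy that this finally works!"))
```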
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8569 | 1.0 | 250 | 0.3386 | 0.894 | 0.8888 |
| 0.2639 | 2.0 | 500 | 0.2192 | 0.927 | 0.9272 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
| 2,137 |
sismetanin/xlm_roberta_base-ru-sentiment-rusentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XLM-RoBERTa-Base-ru-sentiment-RuSentiment
XLM-RoBERTa-Base-ru-sentiment-RuSentiment is an [XLM-RoBERTa-Base](https://huggingface.co/xlm-roberta-base) model fine-tuned on the [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
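The card does not include usage code; below is a minimal sketch, assuming the standard `transformers` pipeline API (the checkpoint exposes only the generic ids `LABEL_0`–`LABEL_4`, and their mapping to RuSentiment classes is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sismetanin/xlm_roberta_base-ru-sentiment-rusentiment",
)
# Returns one of the generic LABEL_0..LABEL_4 ids with its score.
print(classifier("Отличный день, всё получилось!"))
```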
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` | 6,346 |
tals/albert-base-vitaminc-fever | [
"NOT ENOUGH INFO",
"REFUTES",
"SUPPORTS"
] | ---
language: en
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
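A minimal inference sketch (the (claim, evidence) input order follows the VitaminC fact-verification setup and is an assumption here; see the repository above for canonical usage):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "tals/albert-base-vitaminc-fever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical claim/evidence pair for illustration.
claim = "Over 100,000 Wikipedia revisions were collected for VitaminC."
evidence = "We collect over 100,000 Wikipedia revisions that modify an underlying fact."

inputs = tokenizer(claim, evidence, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# One of: SUPPORTS / REFUTES / NOT ENOUGH INFO
print(model.config.id2label[int(logits.argmax(-1))])
```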
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
| 2,357 |
nanopass/distilbert-base-uncased-emotion-2 | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- accuracy
- f1
---
# Distilbert-base-uncased-emotion
## Model description:
[Distilbert](https://arxiv.org/abs/1910.01108) is created with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language understanding capabilities. It is smaller and faster than BERT and other BERT-based models.
[Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) was fine-tuned on the emotion dataset using the Hugging Face Trainer with the following hyperparameters:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
## Model Performance Comparison on the Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Sample per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.0006792712374590337},
{'label': 'joy', 'score': 0.9959300756454468},
{'label': 'love', 'score': 0.0009452480007894337},
{'label': 'anger', 'score': 0.0018055217806249857},
{'label': 'fear', 'score': 0.00041110432357527316},
{'label': 'surprise', 'score': 0.0002288572577526793}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
## Eval results
```json
{
'test_accuracy': 0.938,
'test_f1': 0.937932884041714,
'test_loss': 0.1472451239824295,
'test_mem_cpu_alloc_delta': 0,
'test_mem_cpu_peaked_delta': 0,
'test_mem_gpu_alloc_delta': 0,
'test_mem_gpu_peaked_delta': 163454464,
'test_runtime': 5.0164,
'test_samples_per_second': 398.69
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) | 2,898 |
DTAI-KULeuven/robbertje-merged-dutch-sentiment | [
"Positive",
"Negative"
] | ---
language: nl
license: mit
datasets:
- dbrd
model-index:
- name: robbertje-merged-dutch-sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: dbrd
type: sentiment-analysis
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9294064748201439
widget:
- text: "Ik erken dat dit een boek is, daarmee is alles gezegd."
- text: "Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!"
thumbnail: "https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png"
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
---
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch models" width="75%">
</p>
# RobBERTje finetuned for sentiment analysis on DBRD
This is a finetuned model based on [RobBERTje (merged)](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled). We used [DBRD](https://huggingface.co/datasets/dbrd), which consists of book reviews from [hebban.nl](https://hebban.nl); hence our example sentences about books. We did some limited experiments to test whether this also works for other domains, but performance there was noticeably weaker.
We released a distilled model and a `base`-sized model. Both models perform quite well, so there is only a slight performance tradeoff:
| Model | Identifier | Layers | #Params. | Accuracy |
|----------------|------------------------------------------------------------------------|--------|-----------|-----------|
| RobBERT (v2) | [`DTAI-KULeuven/robbert-v2-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment) | 12 | 116 M |93.3* |
| RobBERTje - Merged (p=0.5)| [`DTAI-KULeuven/robbertje-merged-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbertje-merged-dutch-sentiment) | 6 | 74 M |92.9 |
*The results of RobBERT are from a different run than the one reported in the paper.
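A minimal usage sketch (assuming the standard `transformers` pipeline API; the input is one of the widget examples above):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="DTAI-KULeuven/robbertje-merged-dutch-sentiment",
)
# "Beautiful story, very nicely told and a surprising ending... A winner!"
print(classifier("Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!"))
```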
# Training data and setup
We used the [Dutch Book Reviews Dataset (DBRD)](https://huggingface.co/datasets/dbrd) from van der Burgh et al. (2019).
Originally, these reviews have a five-star rating, which has been converted to positive (⭐️⭐️⭐️⭐️ and ⭐️⭐️⭐️⭐️⭐️), neutral (⭐️⭐️⭐️) and negative (⭐️ and ⭐️⭐️).
We used 19.5k reviews for the training set, 528 reviews for the validation set and 2224 to calculate the final accuracy.
The validation set was used to evaluate a random hyperparameter search over the learning rate, weight decay and gradient accumulation steps.
The full training details are available in [`training_args.bin`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment/blob/main/training_args.bin) as a binary PyTorch file.
# Limitations and biases
- The domain of the reviews is limited to book reviews.
- Most authors of the book reviews were women, which could have caused [a difference in performance for reviews written by men and women](https://www.aclweb.org/anthology/2020.findings-emnlp.292).
## Credits and citation
This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/).
If you would like to cite our paper or models, you can use the following BibTeX:
```
@article{Delobelle_Winters_Berendt_2021,
title = {RobBERTje: A Distilled Dutch BERT Model},
author = {Delobelle, Pieter and Winters, Thomas and Berendt, Bettina},
year = 2021,
month = {Dec.},
journal = {Computational Linguistics in the Netherlands Journal},
volume = 11,
pages = {125–140},
url = {https://www.clinjournal.org/clinj/article/view/131}
}
@inproceedings{delobelle2020robbert,
title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel",
author = "Delobelle, Pieter and
Winters, Thomas and
Berendt, Bettina",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292",
doi = "10.18653/v1/2020.findings-emnlp.292",
pages = "3255--3265"
}
``` | 4,457 |
beomi/beep-KcELECTRA-base-hate | [
"hate",
"none",
"offensive"
] | Entry not found | 15 |
sileod/roberta-base-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- multi_nli
metrics:
- accuracy
model-index:
- name: roberta-base-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: multi_nli
type: multi_nli
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8719307182883341
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mnli
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the multi_nli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4661
- Accuracy: 0.8719
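For reference, a minimal sketch of sentence-pair inference (assuming the standard `transformers` API; the hub config exposes only the generic ids `LABEL_0`–`LABEL_2`, so the mapping to entailment/neutral/contradiction is not documented in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sileod/roberta-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])  # LABEL_0 / LABEL_1 / LABEL_2
```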
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4172 | 1.0 | 24544 | 0.4175 | 0.8508 |
| 0.3324 | 2.0 | 49088 | 0.4146 | 0.8609 |
| 0.2191 | 3.0 | 73632 | 0.4661 | 0.8719 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,733 |
ICFNext/EYY-categorisation-1.0 | [
"arts and culture",
"climate change",
"democratic values",
"digital",
"education",
"employment",
"environmental sustainability",
"european learning mobility",
"health and well-being",
"inclusion",
"n/a",
"policy dialogues",
"renewable energy",
"research and innovation",
"sports",
"studying abroad",
"youth and the world"
] | Entry not found | 15 |
avichr/hebEMO_anticipation | null | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool for detecting polarity and extracting emotions from modern Hebrew User-Generated Content (UGC); it was trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*Sentiment (polarity) analysis model is also available on AWS! for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```python
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True,
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
| 5,444 |
textattack/bert-base-uncased-WNLI | null | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5633802816901409, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
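The card ships without usage code; a minimal sentence-pair sketch, assuming the checkpoint loads with the standard `transformers` pipeline (WNLI is a sentence-pair task, and the checkpoint's generic `LABEL_0`/`LABEL_1` ids are not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="textattack/bert-base-uncased-WNLI",
)
# Winograd-schema-style premise/hypothesis pair.
print(classifier({"text": "The trophy doesn't fit in the suitcase because it is too big.",
                  "text_pair": "The trophy is too big."}))
```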
| 622 |
tr3cks/2LabelsSentimentAnalysisSpanish | null | Entry not found | 15 |
IMSyPP/hate_speech_it | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
widget:
- text: "Ciao, mi chiamo Marcantonio, sono di Roma. Studio informatica all'Università di Roma."
language:
- it
license: mit
---
# Hate Speech Classifier for Social Media Content in Italian Language
A monolingual model for hate speech classification of social media content in the Italian language. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on the Italian ALBERTO pre-trained language model.
## Tokenizer
During training, the text was preprocessed using the original Italian ALBERTO tokenizer. We suggest using the same tokenizer for inference.
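A minimal inference sketch (assuming the standard `transformers` pipeline; the input is the widget example above, and the generic `LABEL_0`–`LABEL_3` ids correspond to the four classes listed under Model output below):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="IMSyPP/hate_speech_it",
)
print(classifier("Ciao, mi chiamo Marcantonio, sono di Roma. Studio informatica all'Università di Roma."))
```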
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | 797 |
textattack/roberta-base-rotten-tomatoes | null | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9033771106941839, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
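A minimal usage sketch (assuming the checkpoint loads with the standard `transformers` pipeline; the output uses the generic `LABEL_0`/`LABEL_1` ids):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="textattack/roberta-base-rotten-tomatoes",
)
# A made-up movie-review sentence for illustration.
print(classifier("A gripping, beautifully shot film with a standout lead performance."))
```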
| 669 |
nbroad/bigbird-base-health-fact | [
"false",
"mixture",
"true",
"unproven"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- health_fact
model-index:
- name: bigbird-base-health-fact
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: health_fact
type: health_fact
split: test
metrics:
- name: F1
type: f1
value: 0.6694031411935434
- name: Accuracy
type: accuracy
value: 0.7948094079480941
- name: False Accuracy
type: accuracy
value: 0.8092783505154639
- name: Mixture Accuracy
type: accuracy
value: 0.4975124378109453
- name: True Accuracy
type: accuracy
value: 0.9148580968280468
- name: Unproven Accuracy
type: accuracy
value: 0.4
---
# bigbird-base-health-fact
This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on the health_fact dataset.
It achieves the following results on the VALIDATION set:
- Overall Accuracy: 0.8228995057660626
- Macro F1: 0.6979224830442152
- False Accuracy: 0.8289473684210527
- Mixture Accuracy: 0.47560975609756095
- True Accuracy: 0.9332273449920508
- Unproven Accuracy: 0.4634146341463415
It achieves the following results on the TEST set:
- Overall Accuracy: 0.7948094079480941
- Macro F1: 0.6694031411935434
- Mixture Accuracy: 0.4975124378109453
- False Accuracy: 0.8092783505154639
- True Accuracy: 0.9148580968280468
- Unproven Accuracy: 0.4
## Model description
Here is how you can use the model:
```python
import torch
from transformers import pipeline
claim = "A mother revealed to her child in a letter after her death that she had just one eye because she had donated the other to him."
text = "In April 2005, we spotted a tearjerker on the Internet about a mother who gave up one of her eyes to a son who had lost one of his at an early age. By February 2007 the item was circulating in e-mail in the following shortened version: My mom only had one eye. I hated her… She was such an embarrassment. She cooked for students and teachers to support the family. There was this one day during elementary school where my mom came to say hello to me. I was so embarrassed. How could she do this to me? I ignored her, threw her a hateful look and ran out. The next day at school one of my classmates said, “EEEE, your mom only has one eye!” I wanted to bury myself. I also wanted my mom to just disappear. I confronted her that day and said, “If you’re only gonna make me a laughing stock, why don’t you just die?” My mom did not respond… I didn’t even stop to think for a second about what I had said, because I was full of anger. I was oblivious to her feelings. I wanted out of that house, and have nothing to do with her. So I studied real hard, got a chance to go abroad to study. Then, I got married. I bought a house of my own. I had kids of my own. I was happy with my life, my kids and the comforts. Then one day, my Mother came to visit me. She hadn’t seen me in years and she didn’t even meet her grandchildren. When she stood by the door, my children laughed at her, and I yelled at her for coming over uninvited. I screamed at her, “How dare you come to my house and scare my children! GET OUT OF HERE! NOW!! !” And to this, my mother quietly answered, “Oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. One day, a letter regarding a school reunion came to my house. So I lied to my wife that I was going on a business trip. After the reunion, I went to the old shack just out of curiosity. My neighbors said that she died. I did not shed a single tear. They handed me a letter that she had wanted me to have. My dearest son, I think of you all the time. I’m sorry that I came to your house and scared your children. I was so glad when I heard you were coming for the reunion. But I may not be able to even get out of bed to see you. I’m sorry that I was a constant embarrassment to you when you were growing up. You see……..when you were very little, you got into an accident, and lost your eye. As a mother, I couldn’t stand watching you having to grow up with one eye. So I gave you mine. I was so proud of my son who was seeing a whole new world for me, in my place, with that eye. With all my love to you, Your mother. In its earlier incarnation, the story identified by implication its location as Korea through statements made by both the mother and the son (the son’s “I left my mother and came to Seoul” and the mother’s “I won’t visit Seoul anymore”). It also supplied a reason for the son’s behavior when his mother arrived unexpectedly to visit him (“My little girl ran away, scared of my mom’s eye” and “I screamed at her, ‘How dare you come to my house and scare my daughter!'”). A further twist was provided in the original: rather than gaining the news of his mother’s death from neighbors (who hand him her letter), the son instead discovered the woman who bore him lying dead on the floor of what used to be his childhood home, her missive to him clutched in her lifeless hand: Give your parents roses while they are alive, not deadMY mom only had one eye. I hated her … she was such an embarrassment. My mom ran a small shop at a flea market. 
She collected little weeds and such to sell … anything for the money we needed she was such an embarrassment. There was this one day during elementary school … It was field day, and my mom came. I was so embarrassed. How could she do this to me? I threw her a hateful look and ran out. The next day at school … “your mom only has one eye?!? !” … And they taunted me. I wished that my mom would just disappear from this world so I said to my mom, “mom … Why don’t you have the other eye?! If you’re only going to make me a laughingstock, why don’t you just die?!! !” my mom did not respond … I guess I felt a little bad, but at the same time, it felt good to think that I had said what I’d wanted to say all this time… maybe it was because my mom hadn’t punished me, but I didn’t think that I had hurt her feelings very badly. That night… I woke up, and went to the kitchen to get a glass of water. My mom was crying there, so quietly, as if she was afraid that she might wake me. I took a look at her, and then turned away. Because of the thing I had said to her earlier, there was something pinching at me in the corner of my heart. Even so, I hated my mother who was crying out of her one eye. So I told myself that I would grow up and become successful. Because I hated my one-eyed mom and our desperate poverty… then I studied real hard. I left my mother and came to Seoul and studied, and got accepted in the Seoul University with all the confidence I had. Then, I got married. I bought a house of my own. Then I had kids, too… now I’m living happily as a successful man. I like it here because it’s a place that doesn’t remind me of my mom. This happiness was getting bigger and bigger, when… what?! Who’s this…it was my mother… still with her one eye. It felt as if the whole sky was falling apart on me. My little girl ran away, scared of my mom’s eye. And I asked her, “who are you? !” “I don’t know you!! !” as if trying to make that real. I screamed at her, “How dare you come to my house and scare my daughter!” “GET OUT OF HERE! NOW!! !” and to this, my mother quietly answered, “oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. Thank goodness… she doesn’t recognize me… I was quite relieved. I told myself that I wasn’t going to care, or think about this for the rest of my life. Then a wave of relief came upon me… One day, a letter regarding a school reunion came to my house. So, lying to my wife that I was going on a business trip, I went. After the reunion, I went down to the old shack, that I used to call a house… just out of curiosity there, I found my mother fallen on the cold ground. But I did not shed a single tear. She had a piece of paper in her hand…. it was a letter to me. My son… I think my life has been long enough now… And… I won’t visit Seoul anymore… but would it be too much to ask if I wanted you to come visit me once in a while? I miss you so much… and I was so glad when I heard you were coming for the reunion. But I decided not to go to the school. …for you… and I’m sorry that I only have one eye, and I was an embarrassment for you. You see, when you were very little, you got into an accident, and lost your eye. as a mom, I couldn’t stand watching you having to grow up with only one eye… so I gave you mine… I was so proud of my son that was seeing a whole new world for me, in my place, with that eye. I was never upset at you for anything you did… the couple times that you were angry with me, I thought to myself, ‘it’s because he loves me…’ my son. 
Oh, my son… I don’t want you to cry for me, because of my death. My son, I love you my son, I love you so much. With all modern medical technology, transplantation of the eyeball is still impossible. The optic nerve isn’t an ordinary nerve, but instead an inset running from the brain. Modern medicine isn’t able to “connect” an eyeball back to brain after an optic nerve has been severed, let alone transplant the eye from a different person. (The only exception is the cornea, the transparent part in front of the eye: corneas are transplanted to replace injured and opaque ones.) We won’t try to comment on whether any surgeon would accept an eye from a living donor for transplant into another — we’ll leave that to others who are far more knowledgeable about medical ethics and transplant procedures. But we will note that the plot device of a mother’s dramatic sacrifice for the sake of her child’s being revealed in a written communication delivered after her demise appears in another legend about maternal love: the 2008 tale about a woman who left a touching message on her cell phone even as life ebbed from her as she used her body to shield the tot during an earthquake. Giving up one’s own life for a loved one is central to a 2005 urban legend about a boy on a motorcycle who has his girlfriend hug him one last time and put on his helmet just before the crash that kills him and spares her. Returning to the “notes from the dead” theme is the 1995 story about a son who discovers only through a posthumous letter from his mother what their occasional dinner “dates” had meant to her. Another legend we’re familiar with features a meme used in the one-eyed mother story (the coming to light of the enduring love of the person who died for the completely unworthy person she’d lavished it on), but that one involves a terminally ill woman and her cheating husband. In it, an about-to-be-spurned wife begs the adulterous hoon she’d married to stick around for another 30 days and to carry her over the threshold of their home once every day of that month as her way of keeping him around long enough for her to kick the bucket and thus spare their son the knowledge that his parents were on the verge of divorce."
label = "false"
device = 0 if torch.cuda.is_available() else -1
pl = pipeline("text-classification", model="nbroad/bigbird-base-health-fact", device=device)
input_text = claim+pl.tokenizer.sep_token+text
print(len(pl.tokenizer(input_text).input_ids))
# 2303 (which is why bigbird is useful)
pl(input_text)
# [{'label': 'false', 'score': 0.3866822123527527}]
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 | False F1 | Mixture F1 | True F1 | Unproven F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:----------:|:-------:|:-----------:|
| 0.5563 | 1.0 | 1226 | 0.5020 | 0.7949 | 0.6062 | 0.7926 | 0.4591 | 0.8986 | 0.2745 |
| 0.5048 | 2.0 | 2452 | 0.4969 | 0.8180 | 0.6846 | 0.8202 | 0.4342 | 0.9126 | 0.5714 |
| 0.3454 | 3.0 | 3678 | 0.5864 | 0.8130 | 0.6874 | 0.8114 | 0.4557 | 0.9154 | 0.5672 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
| 12,627 |
Gadmz/censor-testing-performance | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | Entry not found | 15 |
sismetanin/xlm_roberta_large-ru-sentiment-rusentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XLM-RoBERTa-Large-ru-sentiment-RuSentiment
XLM-RoBERTa-Large-ru-sentiment-RuSentiment is an [XLM-RoBERTa-Large](https://huggingface.co/xlm-roberta-large) model fine-tuned on the [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
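As with the base variant above, a minimal pipeline sketch (this checkpoint again exposes only the generic `LABEL_0`–`LABEL_4` ids):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sismetanin/xlm_roberta_large-ru-sentiment-rusentiment",
)
print(classifier("Фильм не понравился, очень скучно."))
```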
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` | 6,350 |
deepset/bert-base-german-cased-sentiment-Germeval17 | [
"negative",
"neutral",
"positive"
] | Entry not found | 15 |
textattack/albert-base-v2-SST-2 | null | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 3e-05, and a maximum sequence length of 64.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9254587155963303, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
| 619 |
IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_100",
"LABEL_101",
"LABEL_102",
"LABEL_103",
"LABEL_104",
"LABEL_105",
"LABEL_106",
"LABEL_107",
"LABEL_108",
"LABEL_109",
"LABEL_11",
"LABEL_110",
"LABEL_111",
"LABEL_112",
"LABEL_113",
"LABEL_114",
"LABEL_115",
"LABEL_116",
"LABEL_117",
"LABEL_118",
"LABEL_119",
"LABEL_12",
"LABEL_120",
"LABEL_121",
"LABEL_122",
"LABEL_123",
"LABEL_124",
"LABEL_125",
"LABEL_126",
"LABEL_127",
"LABEL_128",
"LABEL_129",
"LABEL_13",
"LABEL_130",
"LABEL_131",
"LABEL_132",
"LABEL_133",
"LABEL_134",
"LABEL_135",
"LABEL_136",
"LABEL_137",
"LABEL_138",
"LABEL_139",
"LABEL_14",
"LABEL_140",
"LABEL_141",
"LABEL_142",
"LABEL_143",
"LABEL_144",
"LABEL_145",
"LABEL_146",
"LABEL_147",
"LABEL_148",
"LABEL_149",
"LABEL_15",
"LABEL_150",
"LABEL_151",
"LABEL_152",
"LABEL_153",
"LABEL_154",
"LABEL_155",
"LABEL_156",
"LABEL_157",
"LABEL_158",
"LABEL_159",
"LABEL_16",
"LABEL_160",
"LABEL_161",
"LABEL_162",
"LABEL_163",
"LABEL_164",
"LABEL_165",
"LABEL_166",
"LABEL_167",
"LABEL_168",
"LABEL_169",
"LABEL_17",
"LABEL_170",
"LABEL_171",
"LABEL_172",
"LABEL_173",
"LABEL_174",
"LABEL_175",
"LABEL_176",
"LABEL_177",
"LABEL_178",
"LABEL_179",
"LABEL_18",
"LABEL_180",
"LABEL_181",
"LABEL_182",
"LABEL_183",
"LABEL_184",
"LABEL_185",
"LABEL_186",
"LABEL_187",
"LABEL_188",
"LABEL_189",
"LABEL_19",
"LABEL_190",
"LABEL_191",
"LABEL_192",
"LABEL_193",
"LABEL_194",
"LABEL_195",
"LABEL_196",
"LABEL_197",
"LABEL_198",
"LABEL_199",
"LABEL_2",
"LABEL_20",
"LABEL_200",
"LABEL_201",
"LABEL_202",
"LABEL_203",
"LABEL_204",
"LABEL_205",
"LABEL_206",
"LABEL_207",
"LABEL_208",
"LABEL_209",
"LABEL_21",
"LABEL_210",
"LABEL_211",
"LABEL_212",
"LABEL_213",
"LABEL_214",
"LABEL_215",
"LABEL_216",
"LABEL_217",
"LABEL_218",
"LABEL_219",
"LABEL_22",
"LABEL_220",
"LABEL_221",
"LABEL_222",
"LABEL_223",
"LABEL_224",
"LABEL_225",
"LABEL_226",
"LABEL_227",
"LABEL_228",
"LABEL_229",
"LABEL_23",
"LABEL_230",
"LABEL_231",
"LABEL_232",
"LABEL_233",
"LABEL_234",
"LABEL_235",
"LABEL_236",
"LABEL_237",
"LABEL_238",
"LABEL_239",
"LABEL_24",
"LABEL_240",
"LABEL_241",
"LABEL_242",
"LABEL_243",
"LABEL_244",
"LABEL_245",
"LABEL_246",
"LABEL_247",
"LABEL_248",
"LABEL_249",
"LABEL_25",
"LABEL_250",
"LABEL_251",
"LABEL_252",
"LABEL_253",
"LABEL_254",
"LABEL_255",
"LABEL_256",
"LABEL_257",
"LABEL_258",
"LABEL_259",
"LABEL_26",
"LABEL_260",
"LABEL_261",
"LABEL_262",
"LABEL_263",
"LABEL_264",
"LABEL_265",
"LABEL_266",
"LABEL_267",
"LABEL_268",
"LABEL_269",
"LABEL_27",
"LABEL_270",
"LABEL_271",
"LABEL_272",
"LABEL_273",
"LABEL_274",
"LABEL_275",
"LABEL_276",
"LABEL_277",
"LABEL_278",
"LABEL_279",
"LABEL_28",
"LABEL_280",
"LABEL_281",
"LABEL_282",
"LABEL_283",
"LABEL_284",
"LABEL_285",
"LABEL_286",
"LABEL_287",
"LABEL_288",
"LABEL_289",
"LABEL_29",
"LABEL_290",
"LABEL_291",
"LABEL_292",
"LABEL_293",
"LABEL_294",
"LABEL_295",
"LABEL_296",
"LABEL_297",
"LABEL_298",
"LABEL_299",
"LABEL_3",
"LABEL_30",
"LABEL_300",
"LABEL_301",
"LABEL_302",
"LABEL_303",
"LABEL_304",
"LABEL_305",
"LABEL_306",
"LABEL_307",
"LABEL_308",
"LABEL_309",
"LABEL_31",
"LABEL_310",
"LABEL_311",
"LABEL_312",
"LABEL_313",
"LABEL_314",
"LABEL_315",
"LABEL_316",
"LABEL_317",
"LABEL_318",
"LABEL_319",
"LABEL_32",
"LABEL_320",
"LABEL_321",
"LABEL_322",
"LABEL_323",
"LABEL_324",
"LABEL_325",
"LABEL_326",
"LABEL_327",
"LABEL_328",
"LABEL_329",
"LABEL_33",
"LABEL_330",
"LABEL_331",
"LABEL_332",
"LABEL_333",
"LABEL_334",
"LABEL_335",
"LABEL_336",
"LABEL_337",
"LABEL_338",
"LABEL_339",
"LABEL_34",
"LABEL_340",
"LABEL_341",
"LABEL_342",
"LABEL_343",
"LABEL_344",
"LABEL_345",
"LABEL_346",
"LABEL_347",
"LABEL_348",
"LABEL_349",
"LABEL_35",
"LABEL_350",
"LABEL_351",
"LABEL_352",
"LABEL_353",
"LABEL_354",
"LABEL_355",
"LABEL_356",
"LABEL_357",
"LABEL_358",
"LABEL_359",
"LABEL_36",
"LABEL_360",
"LABEL_361",
"LABEL_362",
"LABEL_363",
"LABEL_364",
"LABEL_365",
"LABEL_366",
"LABEL_367",
"LABEL_368",
"LABEL_369",
"LABEL_37",
"LABEL_370",
"LABEL_371",
"LABEL_372",
"LABEL_373",
"LABEL_374",
"LABEL_375",
"LABEL_376",
"LABEL_377",
"LABEL_378",
"LABEL_379",
"LABEL_38",
"LABEL_380",
"LABEL_381",
"LABEL_382",
"LABEL_383",
"LABEL_384",
"LABEL_385",
"LABEL_386",
"LABEL_387",
"LABEL_388",
"LABEL_389",
"LABEL_39",
"LABEL_390",
"LABEL_391",
"LABEL_392",
"LABEL_393",
"LABEL_394",
"LABEL_395",
"LABEL_396",
"LABEL_397",
"LABEL_398",
"LABEL_399",
"LABEL_4",
"LABEL_40",
"LABEL_400",
"LABEL_401",
"LABEL_402",
"LABEL_403",
"LABEL_404",
"LABEL_405",
"LABEL_406",
"LABEL_407",
"LABEL_408",
"LABEL_409",
"LABEL_41",
"LABEL_410",
"LABEL_411",
"LABEL_412",
"LABEL_413",
"LABEL_414",
"LABEL_415",
"LABEL_416",
"LABEL_417",
"LABEL_418",
"LABEL_419",
"LABEL_42",
"LABEL_420",
"LABEL_421",
"LABEL_422",
"LABEL_423",
"LABEL_424",
"LABEL_425",
"LABEL_426",
"LABEL_427",
"LABEL_428",
"LABEL_429",
"LABEL_43",
"LABEL_430",
"LABEL_431",
"LABEL_432",
"LABEL_433",
"LABEL_434",
"LABEL_435",
"LABEL_436",
"LABEL_437",
"LABEL_438",
"LABEL_439",
"LABEL_44",
"LABEL_440",
"LABEL_441",
"LABEL_442",
"LABEL_443",
"LABEL_444",
"LABEL_445",
"LABEL_446",
"LABEL_447",
"LABEL_448",
"LABEL_449",
"LABEL_45",
"LABEL_450",
"LABEL_451",
"LABEL_452",
"LABEL_453",
"LABEL_454",
"LABEL_455",
"LABEL_456",
"LABEL_457",
"LABEL_458",
"LABEL_459",
"LABEL_46",
"LABEL_460",
"LABEL_461",
"LABEL_462",
"LABEL_463",
"LABEL_464",
"LABEL_465",
"LABEL_466",
"LABEL_467",
"LABEL_468",
"LABEL_469",
"LABEL_47",
"LABEL_470",
"LABEL_471",
"LABEL_472",
"LABEL_473",
"LABEL_474",
"LABEL_475",
"LABEL_476",
"LABEL_477",
"LABEL_478",
"LABEL_479",
"LABEL_48",
"LABEL_480",
"LABEL_481",
"LABEL_482",
"LABEL_483",
"LABEL_484",
"LABEL_485",
"LABEL_486",
"LABEL_487",
"LABEL_488",
"LABEL_489",
"LABEL_49",
"LABEL_490",
"LABEL_491",
"LABEL_492",
"LABEL_493",
"LABEL_494",
"LABEL_495",
"LABEL_496",
"LABEL_497",
"LABEL_498",
"LABEL_499",
"LABEL_5",
"LABEL_50",
"LABEL_500",
"LABEL_501",
"LABEL_502",
"LABEL_503",
"LABEL_504",
"LABEL_505",
"LABEL_506",
"LABEL_507",
"LABEL_508",
"LABEL_509",
"LABEL_51",
"LABEL_510",
"LABEL_511",
"LABEL_512",
"LABEL_513",
"LABEL_514",
"LABEL_515",
"LABEL_516",
"LABEL_517",
"LABEL_518",
"LABEL_519",
"LABEL_52",
"LABEL_520",
"LABEL_521",
"LABEL_522",
"LABEL_523",
"LABEL_524",
"LABEL_525",
"LABEL_526",
"LABEL_527",
"LABEL_528",
"LABEL_529",
"LABEL_53",
"LABEL_530",
"LABEL_531",
"LABEL_532",
"LABEL_533",
"LABEL_534",
"LABEL_535",
"LABEL_536",
"LABEL_537",
"LABEL_538",
"LABEL_539",
"LABEL_54",
"LABEL_540",
"LABEL_541",
"LABEL_542",
"LABEL_543",
"LABEL_544",
"LABEL_545",
"LABEL_546",
"LABEL_547",
"LABEL_548",
"LABEL_549",
"LABEL_55",
"LABEL_550",
"LABEL_551",
"LABEL_552",
"LABEL_553",
"LABEL_554",
"LABEL_555",
"LABEL_556",
"LABEL_557",
"LABEL_558",
"LABEL_559",
"LABEL_56",
"LABEL_560",
"LABEL_561",
"LABEL_562",
"LABEL_563",
"LABEL_564",
"LABEL_565",
"LABEL_566",
"LABEL_567",
"LABEL_568",
"LABEL_569",
"LABEL_57",
"LABEL_570",
"LABEL_571",
"LABEL_572",
"LABEL_573",
"LABEL_574",
"LABEL_575",
"LABEL_576",
"LABEL_577",
"LABEL_578",
"LABEL_579",
"LABEL_58",
"LABEL_580",
"LABEL_581",
"LABEL_582",
"LABEL_583",
"LABEL_584",
"LABEL_585",
"LABEL_586",
"LABEL_587",
"LABEL_588",
"LABEL_589",
"LABEL_59",
"LABEL_590",
"LABEL_591",
"LABEL_592",
"LABEL_593",
"LABEL_594",
"LABEL_595",
"LABEL_596",
"LABEL_597",
"LABEL_598",
"LABEL_599",
"LABEL_6",
"LABEL_60",
"LABEL_600",
"LABEL_601",
"LABEL_602",
"LABEL_603",
"LABEL_604",
"LABEL_605",
"LABEL_606",
"LABEL_607",
"LABEL_608",
"LABEL_609",
"LABEL_61",
"LABEL_610",
"LABEL_611",
"LABEL_612",
"LABEL_613",
"LABEL_614",
"LABEL_615",
"LABEL_616",
"LABEL_617",
"LABEL_618",
"LABEL_619",
"LABEL_62",
"LABEL_620",
"LABEL_621",
"LABEL_622",
"LABEL_623",
"LABEL_624",
"LABEL_625",
"LABEL_626",
"LABEL_627",
"LABEL_628",
"LABEL_629",
"LABEL_63",
"LABEL_630",
"LABEL_631",
"LABEL_632",
"LABEL_633",
"LABEL_634",
"LABEL_635",
"LABEL_636",
"LABEL_637",
"LABEL_638",
"LABEL_639",
"LABEL_64",
"LABEL_640",
"LABEL_641",
"LABEL_642",
"LABEL_643",
"LABEL_644",
"LABEL_645",
"LABEL_646",
"LABEL_647",
"LABEL_648",
"LABEL_649",
"LABEL_65",
"LABEL_650",
"LABEL_651",
"LABEL_652",
"LABEL_653",
"LABEL_654",
"LABEL_655",
"LABEL_656",
"LABEL_657",
"LABEL_658",
"LABEL_659",
"LABEL_66",
"LABEL_660",
"LABEL_661",
"LABEL_662",
"LABEL_663",
"LABEL_664",
"LABEL_665",
"LABEL_666",
"LABEL_667",
"LABEL_668",
"LABEL_669",
"LABEL_67",
"LABEL_670",
"LABEL_671",
"LABEL_672",
"LABEL_673",
"LABEL_674",
"LABEL_675",
"LABEL_676",
"LABEL_677",
"LABEL_678",
"LABEL_679",
"LABEL_68",
"LABEL_680",
"LABEL_681",
"LABEL_682",
"LABEL_683",
"LABEL_684",
"LABEL_685",
"LABEL_686",
"LABEL_687",
"LABEL_688",
"LABEL_689",
"LABEL_69",
"LABEL_690",
"LABEL_691",
"LABEL_692",
"LABEL_693",
"LABEL_694",
"LABEL_695",
"LABEL_696",
"LABEL_697",
"LABEL_698",
"LABEL_699",
"LABEL_7",
"LABEL_70",
"LABEL_700",
"LABEL_701",
"LABEL_702",
"LABEL_703",
"LABEL_704",
"LABEL_705",
"LABEL_706",
"LABEL_707",
"LABEL_708",
"LABEL_709",
"LABEL_71",
"LABEL_710",
"LABEL_711",
"LABEL_712",
"LABEL_713",
"LABEL_714",
"LABEL_715",
"LABEL_716",
"LABEL_717",
"LABEL_718",
"LABEL_719",
"LABEL_72",
"LABEL_720",
"LABEL_721",
"LABEL_722",
"LABEL_723",
"LABEL_724",
"LABEL_725",
"LABEL_726",
"LABEL_727",
"LABEL_728",
"LABEL_729",
"LABEL_73",
"LABEL_730",
"LABEL_731",
"LABEL_732",
"LABEL_733",
"LABEL_734",
"LABEL_735",
"LABEL_736",
"LABEL_737",
"LABEL_738",
"LABEL_739",
"LABEL_74",
"LABEL_740",
"LABEL_741",
"LABEL_742",
"LABEL_743",
"LABEL_744",
"LABEL_745",
"LABEL_746",
"LABEL_747",
"LABEL_748",
"LABEL_749",
"LABEL_75",
"LABEL_750",
"LABEL_751",
"LABEL_752",
"LABEL_753",
"LABEL_754",
"LABEL_755",
"LABEL_756",
"LABEL_757",
"LABEL_758",
"LABEL_759",
"LABEL_76",
"LABEL_760",
"LABEL_761",
"LABEL_762",
"LABEL_763",
"LABEL_764",
"LABEL_765",
"LABEL_766",
"LABEL_767",
"LABEL_77",
"LABEL_78",
"LABEL_79",
"LABEL_8",
"LABEL_80",
"LABEL_81",
"LABEL_82",
"LABEL_83",
"LABEL_84",
"LABEL_85",
"LABEL_86",
"LABEL_87",
"LABEL_88",
"LABEL_89",
"LABEL_9",
"LABEL_90",
"LABEL_91",
"LABEL_92",
"LABEL_93",
"LABEL_94",
"LABEL_95",
"LABEL_96",
"LABEL_97",
"LABEL_98",
"LABEL_99"
] | ---
license: apache-2.0
# inference: false
# pipeline_tag: zero-shot-image-classification
pipeline_tag: feature-extraction
# inference:
# parameters:
tags:
- clip
- zh
- image-text
- feature-extraction
---
# Model Details
This model is a Chinese CLIP model trained on the [Noah-Wukong Dataset](https://wukong-dataset.github.io/wukong-dataset/), which contains about 100M Chinese image-text pairs. We use ViT-L-14 from [OpenAI](https://github.com/openai/CLIP) as the image encoder and the Chinese pre-trained language model [chinese-roberta-wwm-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) as the text encoder. We freeze the image encoder and fine-tune only the text encoder. The model was trained for 10 epochs, which took about 5 days on 16 A100 GPUs. **This is a beta version; we will continuously update this model.**
# Taiyi (太乙)
Taiyi models are a branch of the Fengshenbang (封神榜) series of models. The models in Taiyi are pre-trained with multimodal pre-training strategies. We will release more image-text models trained on Chinese datasets to benefit the Chinese community.
# Usage
```python3
from PIL import Image
import requests
import torch
from transformers import BertForSequenceClassification, BertTokenizer
from transformers import CLIPProcessor, CLIPModel
import numpy as np
query_texts = ["一只猫", "一只狗", "两只猫", "两只老虎", "一只老虎"]  # Input texts; replace with any Chinese queries.
# Load the Taiyi Chinese text encoder
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese")
text_encoder = BertForSequenceClassification.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese").eval()
text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']
url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # Replace with any image URL.
# Load the CLIP image encoder
clip_model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
image = processor(images=Image.open(requests.get(url, stream=True).raw), return_tensors="pt")
with torch.no_grad():
image_features = clip_model.get_image_features(**image)
text_features = text_encoder(text).logits
# Normalize the features
image_features = image_features / image_features.norm(dim=1, keepdim=True)
text_features = text_features / text_features.norm(dim=1, keepdim=True)
# Compute cosine similarity; logit_scale is the learned scale factor
logit_scale = clip_model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ text_features.t()
logits_per_text = logits_per_image.t()
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
print(np.around(probs, 3))
```
# Evaluation
### Zero-Shot Classification
| model | dataset | Top1 | Top5 |
| ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-326M-Chinese | ImageNet1k-CN | 51.72% | 78.46% |
### Zero-Shot Text-to-Image Retrieval
| model | dataset | Top1 | Top5 | Top10 |
| ---- | ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-326M-Chinese | Flickr30k-CNA-test | 51.08% | 78.20% | 85.94% |
| Taiyi-CLIP-Roberta-326M-Chinese | COCO-CN-test | 52.40% | 80.50% | 89.60% |
| Taiyi-CLIP-Roberta-326M-Chinese | wukong50k | 60.16% | 90.36% | 95.61% |
# Citation
If you find this resource useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | 3,484 |
salesken/paraphrase_diversity_ranker | null | ---
tags: salesken
license: apache-2.0
inference: false
---
We have trained a model to evaluate whether a paraphrase is a semantic variation of the input query or just a surface-level variation. Data augmentation with surface-level variations adds little value to NLP model training. If the approach to paraphrase generation is "over-generate and rank", it is important to have a robust model for scoring and ranking paraphrases. NLG metrics like BLEU, BLEURT, GLEU, and METEOR have not proved very effective at scoring paraphrases.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pandas as pd
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_diversity_ranker")
model = AutoModelForSequenceClassification.from_pretrained("salesken/paraphrase_diversity_ranker")
input_query = ["tough challenges make you stronger."]
paraphrases = [
"tough problems make you stronger",
"tough problems will make you stronger",
"tough challenges make you stronger",
"tough challenges will make you a stronger person",
"tough challenges will make you stronger",
"tough tasks make you stronger",
"the tough task makes you stronger",
"tough stuff makes you stronger",
"if tough times make you stronger",
"the tough part makes you stronger",
"tough issues strengthens you",
"tough shit makes you stronger",
"tough tasks force you to be stronger",
"tough challenge is making you stronger",
"tough problems make you have more strength"]
para_pairs=list(pd.MultiIndex.from_product([input_query, paraphrases]))
features = tokenizer(para_pairs, padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['surface_level_variation', 'semantic_variation']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
sorted_diverse_paraphrases= np.array(para_pairs)[scores[:,1].sort(descending=True).indices].tolist()
print(sorted_diverse_paraphrases)
# to identify the type of paraphrase (surface-level variation or semantic variation)
print("Paraphrase type detection=====", list(zip(para_pairs, labels)))
```
============================================================================
For more robust results, first filter out the paraphrases that are not semantically similar, using a model trained on NLI/STS tasks, and then apply the ranker.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sentence_transformers import SentenceTransformer, util
import torch
import pandas as pd
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_diversity_ranker")
model = AutoModelForSequenceClassification.from_pretrained("salesken/paraphrase_diversity_ranker")
embedder = SentenceTransformer('stsb-bert-large')
input_query = ["tough challenges make you stronger."]
paraphrases = [
"tough problems make you stronger",
"tough problems will make you stronger",
"tough challenges make you stronger",
"tough challenges will make you a stronger person",
"tough challenges will make you stronger",
"tough tasks make you stronger",
"the tough task makes you stronger",
"tough stuff makes you stronger",
"tough people make you stronger",
"if tough times make you stronger",
"the tough part makes you stronger",
"tough issues strengthens you",
"tough shit makes you stronger",
"tough tasks force you to be stronger",
"tough challenge is making you stronger",
"tough problems make you have more strength"]
corpus_embeddings = embedder.encode(paraphrases, convert_to_tensor=True)
query_embedding = embedder.encode(input_query, convert_to_tensor=True)
cos_scores = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0]
para_set=np.array(paraphrases)
a=cos_scores.sort(descending=True)
para= para_set[a.indices[a.values>=0.7].cpu()].tolist()
para_pairs=list(pd.MultiIndex.from_product([input_query, para]))
features = tokenizer(para_pairs, padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['surface_level_variation', 'semantic_variation']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
sorted_diverse_paraphrases= np.array(para)[scores[:,1].sort(descending=True).indices].tolist()
print("Paraphrases sorted by diversity:=======",sorted_diverse_paraphrases)
# to identify the type of paraphrase (surface-level variation or semantic variation)
print("Paraphrase type detection=====", list(zip(para_pairs, labels)))
``` | 4,896 |
MMG/xlm-roberta-base-sa-spanish | [
"Negative",
"Neutral",
"Positive"
] | Entry not found | 15 |
Mithil/86RecallRoberta | null | ---
license: afl-3.0
---
| 25 |
avichr/hebEMO_fear | null | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew user-generated content (UGC). It was trained on a unique COVID-19-related dataset that we collected and annotated.
HebEMO yielded a weighted average F1-score of 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (i.e., the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
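As an alternative to the wrapper above, here is a minimal direct-inference sketch for this specific checkpoint (our addition, not from the HebEMO repo); it assumes `avichr/hebEMO_fear` is a standard binary sequence-classification head whose positive class means the emotion is present, and its labels may surface as generic `LABEL_0`/`LABEL_1`:
```python
from transformers import pipeline

# Hypothetical direct usage; the HebEMO wrapper above is the documented interface.
fear_classifier = pipeline(
    "text-classification",
    model="avichr/hebEMO_fear",
)
print(fear_classifier("החיים יפים ומאושרים"))
```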
### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
| 5,444 |
SkolkovoInstitute/roberta_toxicity_classifier_v1 | null | This model is a clone of [SkolkovoInstitute/roberta_toxicity_classifier](https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier) trained on a disjoint dataset.
While `roberta_toxicity_classifier` is used for evaluation of detoxification algorithms, `roberta_toxicity_classifier_v1` can be used within these algorithms, as in the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914).
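A minimal usage sketch (our addition, not part of the original card); it assumes the clone shares the base model's binary toxic/neutral sequence-classification head, and the label names depend on the saved config:
```python
from transformers import pipeline

# Hypothetical quick-start; inspect model.config.id2label for the actual label names.
toxicity_classifier = pipeline(
    "text-classification",
    model="SkolkovoInstitute/roberta_toxicity_classifier_v1",
)
print(toxicity_classifier("You are amazing!"))
```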
elozano/bert-base-cased-fake-news | [
"Fake",
"Real"
] | Entry not found | 15 |
emrecan/distilbert-base-turkish-cased-allnli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-turkish-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the Turkish NLI dataset [nli_tr](https://huggingface.co/datasets/nli_tr).
It achieves the following results on the evaluation set:
- Loss: 0.6481
- Accuracy: 0.7381
## Model description
More information needed
## Intended uses & limitations
More information needed
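A minimal zero-shot usage sketch, mirroring the widget examples above (the candidate labels are illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/distilbert-base-turkish-cased-allnli_tr",
)
result = classifier(
    "Dolar yükselmeye devam ediyor.",
    candidate_labels=["ekonomi", "siyaset", "spor"],
)
print(result["labels"][0], result["scores"][0])
```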
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.94 | 0.03 | 1000 | 0.9074 | 0.5813 |
| 0.8102 | 0.07 | 2000 | 0.8802 | 0.5949 |
| 0.7737 | 0.1 | 3000 | 0.8491 | 0.6155 |
| 0.7576 | 0.14 | 4000 | 0.8283 | 0.6261 |
| 0.7286 | 0.17 | 5000 | 0.8150 | 0.6362 |
| 0.7162 | 0.2 | 6000 | 0.7998 | 0.6400 |
| 0.7092 | 0.24 | 7000 | 0.7830 | 0.6565 |
| 0.6962 | 0.27 | 8000 | 0.7653 | 0.6629 |
| 0.6876 | 0.31 | 9000 | 0.7630 | 0.6687 |
| 0.6778 | 0.34 | 10000 | 0.7475 | 0.6739 |
| 0.6737 | 0.37 | 11000 | 0.7495 | 0.6781 |
| 0.6712 | 0.41 | 12000 | 0.7350 | 0.6826 |
| 0.6559 | 0.44 | 13000 | 0.7274 | 0.6897 |
| 0.6493 | 0.48 | 14000 | 0.7248 | 0.6902 |
| 0.6483 | 0.51 | 15000 | 0.7263 | 0.6858 |
| 0.6445 | 0.54 | 16000 | 0.7070 | 0.6978 |
| 0.6467 | 0.58 | 17000 | 0.7083 | 0.6981 |
| 0.6332 | 0.61 | 18000 | 0.6996 | 0.7004 |
| 0.6288 | 0.65 | 19000 | 0.6979 | 0.6978 |
| 0.6308 | 0.68 | 20000 | 0.6912 | 0.7040 |
| 0.622 | 0.71 | 21000 | 0.6904 | 0.7092 |
| 0.615 | 0.75 | 22000 | 0.6872 | 0.7094 |
| 0.6186 | 0.78 | 23000 | 0.6877 | 0.7075 |
| 0.6183 | 0.82 | 24000 | 0.6818 | 0.7111 |
| 0.6115 | 0.85 | 25000 | 0.6856 | 0.7122 |
| 0.608 | 0.88 | 26000 | 0.6697 | 0.7179 |
| 0.6071 | 0.92 | 27000 | 0.6727 | 0.7181 |
| 0.601 | 0.95 | 28000 | 0.6798 | 0.7118 |
| 0.6018 | 0.99 | 29000 | 0.6854 | 0.7071 |
| 0.5762 | 1.02 | 30000 | 0.6697 | 0.7214 |
| 0.5507 | 1.05 | 31000 | 0.6710 | 0.7185 |
| 0.5575 | 1.09 | 32000 | 0.6709 | 0.7226 |
| 0.5493 | 1.12 | 33000 | 0.6659 | 0.7191 |
| 0.5464 | 1.15 | 34000 | 0.6709 | 0.7232 |
| 0.5595 | 1.19 | 35000 | 0.6642 | 0.7220 |
| 0.5446 | 1.22 | 36000 | 0.6709 | 0.7202 |
| 0.5524 | 1.26 | 37000 | 0.6751 | 0.7148 |
| 0.5473 | 1.29 | 38000 | 0.6642 | 0.7209 |
| 0.5477 | 1.32 | 39000 | 0.6662 | 0.7223 |
| 0.5522 | 1.36 | 40000 | 0.6586 | 0.7227 |
| 0.5406 | 1.39 | 41000 | 0.6602 | 0.7258 |
| 0.54 | 1.43 | 42000 | 0.6564 | 0.7273 |
| 0.5458 | 1.46 | 43000 | 0.6780 | 0.7213 |
| 0.5448 | 1.49 | 44000 | 0.6561 | 0.7235 |
| 0.5418 | 1.53 | 45000 | 0.6600 | 0.7253 |
| 0.5408 | 1.56 | 46000 | 0.6616 | 0.7274 |
| 0.5451 | 1.6 | 47000 | 0.6557 | 0.7283 |
| 0.5385 | 1.63 | 48000 | 0.6583 | 0.7295 |
| 0.5261 | 1.66 | 49000 | 0.6468 | 0.7325 |
| 0.5364 | 1.7 | 50000 | 0.6447 | 0.7329 |
| 0.5294 | 1.73 | 51000 | 0.6429 | 0.7320 |
| 0.5332 | 1.77 | 52000 | 0.6508 | 0.7272 |
| 0.5274 | 1.8 | 53000 | 0.6492 | 0.7326 |
| 0.5286 | 1.83 | 54000 | 0.6470 | 0.7318 |
| 0.5359 | 1.87 | 55000 | 0.6393 | 0.7354 |
| 0.5366 | 1.9 | 56000 | 0.6445 | 0.7367 |
| 0.5296 | 1.94 | 57000 | 0.6413 | 0.7313 |
| 0.5346 | 1.97 | 58000 | 0.6393 | 0.7315 |
| 0.5264 | 2.0 | 59000 | 0.6448 | 0.7357 |
| 0.4857 | 2.04 | 60000 | 0.6640 | 0.7335 |
| 0.4888 | 2.07 | 61000 | 0.6612 | 0.7318 |
| 0.4964 | 2.11 | 62000 | 0.6516 | 0.7337 |
| 0.493 | 2.14 | 63000 | 0.6503 | 0.7356 |
| 0.4961 | 2.17 | 64000 | 0.6519 | 0.7348 |
| 0.4847 | 2.21 | 65000 | 0.6517 | 0.7327 |
| 0.483 | 2.24 | 66000 | 0.6555 | 0.7310 |
| 0.4857 | 2.28 | 67000 | 0.6525 | 0.7312 |
| 0.484 | 2.31 | 68000 | 0.6444 | 0.7342 |
| 0.4792 | 2.34 | 69000 | 0.6508 | 0.7330 |
| 0.488 | 2.38 | 70000 | 0.6513 | 0.7344 |
| 0.472 | 2.41 | 71000 | 0.6547 | 0.7346 |
| 0.4872 | 2.45 | 72000 | 0.6500 | 0.7342 |
| 0.4782 | 2.48 | 73000 | 0.6585 | 0.7358 |
| 0.481 | 2.51 | 74000 | 0.6477 | 0.7356 |
| 0.4822 | 2.55 | 75000 | 0.6587 | 0.7346 |
| 0.4728 | 2.58 | 76000 | 0.6572 | 0.7340 |
| 0.4841 | 2.62 | 77000 | 0.6443 | 0.7374 |
| 0.4885 | 2.65 | 78000 | 0.6494 | 0.7362 |
| 0.4752 | 2.68 | 79000 | 0.6509 | 0.7382 |
| 0.4883 | 2.72 | 80000 | 0.6457 | 0.7371 |
| 0.4888 | 2.75 | 81000 | 0.6497 | 0.7364 |
| 0.4844 | 2.79 | 82000 | 0.6481 | 0.7376 |
| 0.4833 | 2.82 | 83000 | 0.6451 | 0.7389 |
| 0.48 | 2.85 | 84000 | 0.6423 | 0.7373 |
| 0.4832 | 2.89 | 85000 | 0.6477 | 0.7357 |
| 0.4805 | 2.92 | 86000 | 0.6464 | 0.7379 |
| 0.4775 | 2.96 | 87000 | 0.6477 | 0.7380 |
| 0.4843 | 2.99 | 88000 | 0.6481 | 0.7381 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
| 7,089 |
Ahmedgr/DistilBert_Fine_tune_QuestionVsAnswer | [
"answer",
"question"
] | ---
tags:
- generated_from_trainer
model-index:
- name: DistilBert_Fine_tune_QuestionVsAnswer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBert_Fine_tune_QuestionVsAnswer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
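A minimal usage sketch (our addition to this auto-generated card); it assumes the checkpoint exposes the `question`/`answer` labels listed for this model:
```python
from transformers import pipeline

# Hypothetical quick-start; the expected output label is illustrative.
classifier = pipeline(
    "text-classification",
    model="Ahmedgr/DistilBert_Fine_tune_QuestionVsAnswer",
)
print(classifier("How do I reset my password?"))  # expected label: 'question'
```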
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
| 943 |
qanastek/XLMRoberta-Alexa-Intents-Classification | [
"audio_volume_other",
"play_music",
"iot_hue_lighton",
"general_greet",
"calendar_set",
"audio_volume_down",
"social_query",
"audio_volume_mute",
"iot_wemo_on",
"iot_hue_lightup",
"audio_volume_up",
"iot_coffee",
"takeaway_query",
"qa_maths",
"play_game",
"cooking_query",
"iot_hue_lightdim",
"iot_wemo_off",
"music_settings",
"weather_query",
"news_query",
"alarm_remove",
"social_post",
"recommendation_events",
"transport_taxi",
"takeaway_order",
"music_query",
"calendar_query",
"lists_query",
"qa_currency",
"recommendation_movies",
"general_joke",
"recommendation_locations",
"email_querycontact",
"lists_remove",
"play_audiobook",
"email_addcontact",
"lists_createoradd",
"play_radio",
"qa_stock",
"alarm_query",
"email_sendemail",
"general_quirky",
"music_likeness",
"cooking_recipe",
"email_query",
"datetime_query",
"transport_traffic",
"play_podcasts",
"iot_hue_lightchange",
"calendar_remove",
"transport_query",
"transport_ticket",
"qa_factoid",
"iot_cleaning",
"alarm_set",
"datetime_convert",
"iot_hue_lightoff",
"qa_definition",
"music_dislikeness"
] | ---
tags:
- Transformers
- text-classification
- intent-classification
- multi-class-classification
- natural-language-understanding
languages:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
multilinguality:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
datasets:
- qanastek/MASSIVE
widget:
- text: "wake me up at five am this week"
- text: "je veux écouter la chanson de jacques brel encore une fois"
- text: "quiero escuchar la canción de arijit singh una vez más"
- text: "olly onde é que á um parque por perto onde eu possa correr"
- text: "פרק הבא בפודקאסט בבקשה"
- text: "亚马逊股价"
- text: "найди билет на поезд в санкт-петербург"
license: cc-by-4.0
---
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
## Demo: How to use in HuggingFace Transformers Pipeline
Requires [transformers](https://pypi.org/project/transformers/): ```pip install transformers```
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
model_name = 'qanastek/XLMRoberta-Alexa-Intents-Classification'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
res = classifier("réveille-moi à neuf heures du matin le vendredi")
print(res)
```
Outputs:
```python
[{'label': 'alarm_set', 'score': 0.9998375177383423}]
```
## Training data
[MASSIVE](https://huggingface.co/datasets/qanastek/MASSIVE) is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
## Intents
* audio_volume_other
* play_music
* iot_hue_lighton
* general_greet
* calendar_set
* audio_volume_down
* social_query
* audio_volume_mute
* iot_wemo_on
* iot_hue_lightup
* audio_volume_up
* iot_coffee
* takeaway_query
* qa_maths
* play_game
* cooking_query
* iot_hue_lightdim
* iot_wemo_off
* music_settings
* weather_query
* news_query
* alarm_remove
* social_post
* recommendation_events
* transport_taxi
* takeaway_order
* music_query
* calendar_query
* lists_query
* qa_currency
* recommendation_movies
* general_joke
* recommendation_locations
* email_querycontact
* lists_remove
* play_audiobook
* email_addcontact
* lists_createoradd
* play_radio
* qa_stock
* alarm_query
* email_sendemail
* general_quirky
* music_likeness
* cooking_recipe
* email_query
* datetime_query
* transport_traffic
* play_podcasts
* iot_hue_lightchange
* calendar_remove
* transport_query
* transport_ticket
* qa_factoid
* iot_cleaning
* alarm_set
* datetime_convert
* iot_hue_lightoff
* qa_definition
* music_dislikeness
## Evaluation results
```plain
precision recall f1-score support
alarm_query 0.9661 0.9037 0.9338 1734
alarm_remove 0.9484 0.9608 0.9545 1071
alarm_set 0.8611 0.9254 0.8921 2091
audio_volume_down 0.8657 0.9537 0.9075 561
audio_volume_mute 0.8608 0.9130 0.8861 1632
audio_volume_other 0.8684 0.5392 0.6653 306
audio_volume_up 0.7198 0.8446 0.7772 663
calendar_query 0.7555 0.8229 0.7878 6426
calendar_remove 0.8688 0.9441 0.9049 3417
calendar_set 0.9092 0.9014 0.9053 10659
cooking_query 0.0000 0.0000 0.0000 0
cooking_recipe 0.9282 0.8592 0.8924 3672
datetime_convert 0.8144 0.7686 0.7909 765
datetime_query 0.9152 0.9305 0.9228 4488
email_addcontact 0.6482 0.8431 0.7330 612
email_query 0.9629 0.9319 0.9472 6069
email_querycontact 0.6853 0.8032 0.7396 1326
email_sendemail 0.9530 0.9381 0.9455 5814
general_greet 0.1026 0.3922 0.1626 51
general_joke 0.9305 0.9123 0.9213 969
general_quirky 0.6984 0.5417 0.6102 8619
iot_cleaning 0.9590 0.9359 0.9473 1326
iot_coffee 0.9304 0.9749 0.9521 1836
iot_hue_lightchange 0.8794 0.9374 0.9075 1836
iot_hue_lightdim 0.8695 0.8711 0.8703 1071
iot_hue_lightoff 0.9440 0.9229 0.9334 2193
iot_hue_lighton 0.4545 0.5882 0.5128 153
iot_hue_lightup 0.9271 0.8315 0.8767 1377
iot_wemo_off 0.9615 0.8715 0.9143 918
iot_wemo_on 0.8455 0.7941 0.8190 510
lists_createoradd 0.8437 0.8356 0.8396 1989
lists_query 0.8918 0.8335 0.8617 2601
lists_remove 0.9536 0.8601 0.9044 2652
music_dislikeness 0.7725 0.7157 0.7430 204
music_likeness 0.8570 0.8159 0.8359 1836
music_query 0.8667 0.8050 0.8347 1785
music_settings 0.4024 0.3301 0.3627 306
news_query 0.8343 0.8657 0.8498 6324
play_audiobook 0.8172 0.8125 0.8149 2091
play_game 0.8666 0.8403 0.8532 1785
play_music 0.8683 0.8845 0.8763 8976
play_podcasts 0.8925 0.9125 0.9024 3213
play_radio 0.8260 0.8935 0.8585 3672
qa_currency 0.9459 0.9578 0.9518 1989
qa_definition 0.8638 0.8552 0.8595 2907
qa_factoid 0.7959 0.8178 0.8067 7191
qa_maths 0.8937 0.9302 0.9116 1275
qa_stock 0.7995 0.9412 0.8646 1326
recommendation_events 0.7646 0.7702 0.7674 2193
recommendation_locations 0.7489 0.8830 0.8104 1581
recommendation_movies 0.6907 0.7706 0.7285 1020
social_post 0.9623 0.9080 0.9344 4131
social_query 0.8104 0.7914 0.8008 1275
takeaway_order 0.7697 0.8458 0.8059 1122
takeaway_query 0.9059 0.8571 0.8808 1785
transport_query 0.8141 0.7559 0.7839 2601
transport_taxi 0.9222 0.9403 0.9312 1173
transport_ticket 0.9259 0.9384 0.9321 1785
transport_traffic 0.6919 0.9660 0.8063 765
weather_query 0.9387 0.9492 0.9439 7956
accuracy 0.8617 151674
macro avg 0.8162 0.8273 0.8178 151674
weighted avg 0.8639 0.8617 0.8613 151674
```
| 7,985 |
MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c | [
"entailment",
"not_entailment"
] | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought the movie was actually disappointing. [SEP] The movie was good."
---
# MiniLM-L6-mnli-fever-docnli-ling-2c
## Model description
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is MiniLM-L6 from Microsoft, which is very fast but a bit less accurate than other models.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_name = "MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
MiniLM-L6-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c
---------|----------|---------|----------|----------
(to upload)
## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m.laurer{at}vu.nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) | 3,986 |
TransQuest/monotransquest-hter-en_zh-wiki | [
"LABEL_0"
] | ---
language: en-zh
tags:
- Quality Estimation
- monotransquest
- hter
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or they can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as-is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_zh-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducing these conflicts is important for conservation.", "减少这些冲突对保护并不重要。"]])  # (English source, Chinese target containing an introduced error)
print(predictions)
```
## Documentation
For more details, follow the documentation below.
## Table of Contents
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 5,426 |
amanbawa96/legal-bert-based-uncase | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",
"LABEL_3",
"LABEL_30",
"LABEL_31",
"LABEL_32",
"LABEL_33",
"LABEL_34",
"LABEL_35",
"LABEL_36",
"LABEL_37",
"LABEL_38",
"LABEL_39",
"LABEL_4",
"LABEL_40",
"LABEL_41",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
has-abi/distilBERT-finetuned-resumes-sections | [
"awards",
"certificates",
"contact/name/title",
"education",
"interests",
"languages",
"para",
"professional_experiences",
"projects",
"skills",
"soft_skills",
"summary"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilBERT-finetuned-resumes-sections
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT-finetuned-resumes-sections
This model is a fine-tuned version of [Geotrend/distilbert-base-en-fr-cased](https://huggingface.co/Geotrend/distilbert-base-en-fr-cased) on a private resume sections dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0487
- F1: 0.9512
- Roc Auc: 0.9729
- Accuracy: 0.9482
## Model description
This model classifies a resume section into 12 classes.
### Possible classes for a resume section
**awards**, **certificates**, **contact/name/title**, **education**, **interests**, **languages**, **para**, **professional_experiences**, **projects**, **skills**, **soft_skills**, **summary**.
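A minimal usage sketch (our addition); it assumes a multi-label head consistent with the F1/ROC-AUC/accuracy metrics above, so sigmoid scores with a threshold are appropriate:
```python
from transformers import pipeline

# top_k=None returns a score for every section class; function_to_apply="sigmoid"
# matches a multi-label setup (apply your own threshold, e.g. 0.5).
section_classifier = pipeline(
    "text-classification",
    model="has-abi/distilBERT-finetuned-resumes-sections",
    top_k=None,
    function_to_apply="sigmoid",
)
print(section_classifier("2015-2019: B.Sc. in Computer Science, University X"))
```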
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.058 | 1.0 | 1083 | 0.0457 | 0.9186 | 0.9494 | 0.9020 |
| 0.0277 | 2.0 | 2166 | 0.0393 | 0.9327 | 0.9614 | 0.9251 |
| 0.0154 | 3.0 | 3249 | 0.0333 | 0.9425 | 0.9671 | 0.9367 |
| 0.0104 | 4.0 | 4332 | 0.0408 | 0.9357 | 0.9645 | 0.9293 |
| 0.0084 | 5.0 | 5415 | 0.0405 | 0.9376 | 0.9643 | 0.9298 |
| 0.0065 | 6.0 | 6498 | 0.0419 | 0.9439 | 0.9699 | 0.9385 |
| 0.0051 | 7.0 | 7581 | 0.0450 | 0.9412 | 0.9674 | 0.9376 |
| 0.0034 | 8.0 | 8664 | 0.0406 | 0.9433 | 0.9684 | 0.9372 |
| 0.0035 | 9.0 | 9747 | 0.0441 | 0.9403 | 0.9664 | 0.9358 |
| 0.0024 | 10.0 | 10830 | 0.0492 | 0.9419 | 0.9678 | 0.9367 |
| 0.0026 | 11.0 | 11913 | 0.0470 | 0.9468 | 0.9708 | 0.9436 |
| 0.0022 | 12.0 | 12996 | 0.0514 | 0.9424 | 0.9679 | 0.9395 |
| 0.0013 | 13.0 | 14079 | 0.0458 | 0.9478 | 0.9715 | 0.9441 |
| 0.0019 | 14.0 | 15162 | 0.0494 | 0.9477 | 0.9711 | 0.9450 |
| 0.0007 | 15.0 | 16245 | 0.0492 | 0.9496 | 0.9719 | 0.9464 |
| 0.0009 | 16.0 | 17328 | 0.0487 | 0.9512 | 0.9729 | 0.9482 |
| 0.001 | 17.0 | 18411 | 0.0510 | 0.9480 | 0.9711 | 0.9441 |
| 0.0006 | 18.0 | 19494 | 0.0532 | 0.9477 | 0.9709 | 0.9441 |
| 0.0007 | 19.0 | 20577 | 0.0511 | 0.9487 | 0.9720 | 0.9445 |
| 0.0005 | 20.0 | 21660 | 0.0522 | 0.9471 | 0.9710 | 0.9436 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 3,207 |
avichr/hebEMO_anger | null | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew user-generated content (UGC). It was trained on a unique COVID-19-related dataset that we collected and annotated.
HebEMO yielded a weighted average F1-score of 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (i.e., the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
| 5,444 |
avichr/hebEMO_disgust | null | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew user-generated content (UGC). It was trained on a unique COVID-19-related dataset that we collected and annotated.
HebEMO yielded a weighted average F1-score of 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (i.e., the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you use this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
| 5,444 |
tupleblog/salim-classifier | null | ---
widget:
- text: "รัฐรับผิดชอบทุกชีวิตไม่ได้หรอกคนให้บริการต้องจัดการเองถ้าจะเปิดผับบาร์"
---

# Salim-Classifier
**Objective:** These days it is extremely hard to find friends who love the nation, religion, monarchy, and government; all around are 'three-finger' protesters and 'red buffaloes' waiting to do harm.
Our team therefore built this model to help find 'salim' friends from comments, who grow rarer in Thai society by the day, as a step toward building a stronger salim community.
## How to use
Install `transformers` from Hugging Face and use the model as follows:
``` py
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification,
pipeline
)
# download model from hub
tokenizer = AutoTokenizer.from_pretrained("tupleblog/salim-classifier")
model = AutoModelForSequenceClassification.from_pretrained("tupleblog/salim-classifier")
# using pipeline to classify an input text
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
text = "จิตไม่ปกติ วันๆคอยแต่ให้คนเสี้ยมทะเลาะกันด่ากัน คอยจ้องแต่จะเล่นงานรัฐบาล ความคดด้านลบ"
classifier(text)
# >> [{'label': 'HIGHLY LIKELY SALIM', 'score': 0.9989368915557861}]  Congratulations, this is probably a salim!
```
## Data collection
We created example data, annotated it, and then trained the model on it with WangchanBERTa.
The data may carry some bias, since the team collected it themselves.
## Try it out on HuggingFace
You can try the model on HuggingFace by entering a Facebook comment into the input box at
[huggingface.co/tupleblog/salim-classifier](https://huggingface.co/tupleblog/salim-classifier)
**Example sentences**
- รัฐรับผิดชอบทุกชีวิตไม่ได้หรอกคนให้บริการต้องจัดการเองถ้าจะเปิดผับบาร์
- แค่เคารพกฎหมาย คนพวกนี้ยังทำไม่ได้เลย แล้วจะถามหาความก้าวหน้าของประเทศ ?
- หมามันยังยืนเคารพธงชาติ แต่พวกนี้กลับทำอะไรไม่อายเดรัจฉาน
- ถ้าไม่ชอบประชาธิปไตย จะไปใช้วิธีการปกครองแบบไหนหรอครับ แล้วแบบไหนถึงดีหรอ ผมไม่เข้าใจครับอดีตผ่านไปแล้ว ทำไมไม่มองที่อนาคตกันหละครับ
- อีพวกสามกีบ`<pad>`
For texts shorter than 50 characters, we recommend appending `<pad>` after the text for higher accuracy.
## Performance
We report performance on a 20% evaluation set (accuracy, precision, recall, macro F1-score) as follows:
| Accuracy | Precision | Recall | F1 |
| -------- | --------- | ------ | ------ |
| 86.15% | 86.12% | 86.13% | 86.13% |
| 2,347 |
ydshieh/tiny-random-gptj-for-sequence-classification | null | Entry not found | 15 |
MKaan/multilingual-cpv-sector-classifier | [
"Administration, defence and social security services. 👮♀️",
"Agricultural machinery. 🚜",
"Agricultural, farming, fishing, forestry and related products. 🌾",
"Agricultural, forestry, horticultural, aquacultural and apicultural services. 👨🏿🌾",
"Architectural, construction, engineering and inspection services. 👷♂️",
"Business services: law, marketing, consulting, recruitment, printing and security. 👩💼",
"Chemical products. 🧪",
"Clothing, footwear, luggage articles and accessories. 👖",
"Collected and purified water. 🌊",
"Construction structures and materials; auxiliary products to construction (excepts electric apparatus). 🧱",
"Construction work. 🏗️",
"Education and training services. 👩🏿🏫",
"Electrical machinery, apparatus, equipment and consumables; Lighting. ⚡",
"Financial and insurance services. 👨💼",
"Food, beverages, tobacco and related products. 🍽️",
"Furniture (incl. office furniture), furnishings, domestic appliances (excl. lighting) and cleaning products. 🗄️",
"Health and social work services. 👨🏽⚕️",
"Hotel, restaurant and retail trade services. 🏨",
"IT services: consulting, software development, Internet and support. 🖥️",
"Industrial machinery. 🏭",
"Installation services (except software). 🛠️",
"Laboratory, optical and precision equipments (excl. glasses). 🔬",
"Leather and textile fabrics, plastic and rubber materials. 🧵",
"Machinery for mining, quarrying, construction equipment. ⛏️",
"Medical equipments, pharmaceuticals and personal care products. 💉",
"Mining, basic metals and related products. ⚙️",
"Musical instruments, sport goods, games, toys, handicraft, art materials and accessories. 🎸",
"Office and computing machinery, equipment and supplies except furniture and software packages. 🖨️",
"Other community, social and personal services. 🧑🏽🤝🧑🏽",
"Petroleum products, fuel, electricity and other sources of energy. 🔋",
"Postal and telecommunications services. 📶",
"Printed matter and related products. 📰",
"Public utilities. ⛲",
"Radio, television, communication, telecommunication and related equipment. 📡",
"Real estate services. 🏠",
"Recreational, cultural and sporting services. 🚴",
"Repair and maintenance services. 🔧",
"Research and development services and related consultancy services. 👩🔬",
"Security, fire-fighting, police and defence equipment. 🧯",
"Services related to the oil and gas industry. ⛽",
"Sewage-, refuse-, cleaning-, and environmental services. 🧹",
"Software package and information systems. 🔣",
"Supporting and auxiliary transport services; travel agencies services. 🚃",
"Transport equipment and auxiliary products to transportation. 🚌",
"Transport services (excl. Waste transport). 💺"
] | ---
license: apache-2.0
tags:
- eu
- public procurement
- cpv
- sector
- multilingual
- transformers
- text-classification
widget:
- text: "Oppegård municipality, hereafter called the contracting authority, intends to enter into a framework agreement with one supplier for the procurement of fresh bread and bakery products for Oppegård municipality. The contract is estimated to NOK 1 400 000 per annum excluding VAT The total for the entire period including options is NOK 5 600 000 excluding VAT"
---
# multilingual-cpv-sector-classifier
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on [the Tenders Electronic Daily Public Procurement Data](https://simap.ted.europa.eu/en).
It achieves the following results on the evaluation set:
- F1 Score: 0.686
## Model description
The model takes procurement descriptions written in any of [104 languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) and classifies them into 45 sector classes represented by [CPV(Common Procurement Vocabulary)](https://simap.ted.europa.eu/en_GB/web/simap/cpv) code descriptions as listed below.
| Common Procurement Vocabulary |
|:-----------------------------|
| Administration, defence and social security services. 👮♀️ |
| Agricultural machinery. 🚜 |
| Agricultural, farming, fishing, forestry and related products. 🌾 |
| Agricultural, forestry, horticultural, aquacultural and apicultural services. 👨🏿🌾 |
| Architectural, construction, engineering and inspection services. 👷♂️ |
| Business services: law, marketing, consulting, recruitment, printing and security. 👩💼 |
| Chemical products. 🧪 |
| Clothing, footwear, luggage articles and accessories. 👖 |
| Collected and purified water. 🌊 |
| Construction structures and materials; auxiliary products to construction (excepts electric apparatus). 🧱 |
| Construction work. 🏗️ |
| Education and training services. 👩🏿🏫 |
| Electrical machinery, apparatus, equipment and consumables; Lighting. ⚡ |
| Financial and insurance services. 👨💼 |
| Food, beverages, tobacco and related products. 🍽️ |
| Furniture (incl. office furniture), furnishings, domestic appliances (excl. lighting) and cleaning products. 🗄️ |
| Health and social work services. 👨🏽⚕️ |
| Hotel, restaurant and retail trade services. 🏨 |
| IT services: consulting, software development, Internet and support. 🖥️ |
| Industrial machinery. 🏭 |
| Installation services (except software). 🛠️ |
| Laboratory, optical and precision equipments (excl. glasses). 🔬 |
| Leather and textile fabrics, plastic and rubber materials. 🧵 |
| Machinery for mining, quarrying, construction equipment. ⛏️ |
| Medical equipments, pharmaceuticals and personal care products. 💉 |
| Mining, basic metals and related products. ⚙️ |
| Musical instruments, sport goods, games, toys, handicraft, art materials and accessories. 🎸 |
| Office and computing machinery, equipment and supplies except furniture and software packages. 🖨️ |
| Other community, social and personal services. 🧑🏽🤝🧑🏽 |
| Petroleum products, fuel, electricity and other sources of energy. 🔋 |
| Postal and telecommunications services. 📶 |
| Printed matter and related products. 📰 |
| Public utilities. ⛲ |
| Radio, television, communication, telecommunication and related equipment. 📡 |
| Real estate services. 🏠 |
| Recreational, cultural and sporting services. 🚴 |
| Repair and maintenance services. 🔧 |
| Research and development services and related consultancy services. 👩🔬 |
| Security, fire-fighting, police and defence equipment. 🧯 |
| Services related to the oil and gas industry. ⛽ |
| Sewage-, refuse-, cleaning-, and environmental services. 🧹 |
| Software package and information systems. 🔣 |
| Supporting and auxiliary transport services; travel agencies services. 🚃 |
| Transport equipment and auxiliary products to transportation. 🚌 |
| Transport services (excl. Waste transport). 💺 |
## Intended uses & limitations
- Input description should be written in any of [the 104 languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) that MBERT supports.
- The model has only been evaluated on 22 languages, so there is no information about its performance in the remaining ones.
- The domain is also restricted to awarded procurement notice descriptions in the European Union. Evaluating on full document texts might change the performance.
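A minimal usage sketch (an addition to this card; it assumes the checkpoint works with the standard `text-classification` pipeline and that the sector descriptions above are the configured label names):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MKaan/multilingual-cpv-sector-classifier",
)

notice = (
    "Oppegård municipality intends to enter into a framework agreement with one "
    "supplier for the procurement of fresh bread and bakery products."
)
print(classifier(notice))
# e.g. [{'label': 'Food, beverages, tobacco and related products. 🍽️', 'score': ...}]  (illustrative)
```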
## Training and evaluation data
- The whole dataset consists of 744,360 rows, shuffled and split into train and validation sets in an 80%/20% manner.
- Each description represents a unique contract notice description awarded between 2011 and 2018.
- Both training and validation data contain contract notice descriptions written in 22 European languages. (Maltese and Irish were excluded due to their scarcity relative to the whole dataset.)
## Training procedure
The training procedure was completed on Google Cloud v3-8 TPUs. Thanks to [Google](https://sites.research.google/trc/about/) for providing access to Cloud TPUs.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- num_epochs: 3
- gradient_accumulation_steps: 8
- batch_size_per_device: 4
- total_train_batch_size: 32
### Training results
| Epoch | Step | F1 Score|
|:-----:|:------:|:------:|
| 1 | 18,609 | 0.630 |
| 2 | 37,218 | 0.674 |
| 3 | 55,827 | 0.686 |
| Language| F1 Score| Test Size|
|:-----:|:-----:|:-----:|
| PL| 0.759| 13950|
| RO| 0.736| 3522|
| SK| 0.719| 1122|
| LT| 0.687| 2424|
| HU| 0.681| 1879|
| BG| 0.675| 2459|
| CS| 0.668| 2694|
| LV| 0.664| 836|
| DE| 0.645| 35354|
| FI| 0.644| 1898|
| ES| 0.643| 7483|
| PT| 0.631| 874|
| EN| 0.631| 16615|
| HR| 0.626| 865|
| IT| 0.626| 8035|
| NL| 0.624| 5640|
| EL| 0.623| 1724|
| SL| 0.615| 482|
| SV| 0.607| 3326|
| DA| 0.603| 1925|
| FR| 0.601| 33113|
| ET| 0.572| 458|| | 5,964 |
NDugar/debertav3-mnli-snli-anli | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model, with 48 layers and a hidden size of 1,536. It has 1.5B parameters in total and was trained on 160GB of raw data.
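A minimal usage sketch for this checkpoint (an addition to this card; it assumes the standard `zero-shot-classification` pipeline works here, as the card's `pipeline_tag` suggests, and the candidate labels below are made up):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="NDugar/debertav3-mnli-snli-anli")

result = classifier(
    "I love you. I hate you",  # one of the widget examples above
    candidate_labels=["affection", "hostility"],
)
print(result["labels"][0], result["scores"][0])
```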
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **DeepSpeed**, as it is faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \\
run_glue.py \\
--model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 256 \\
--per_device_train_batch_size ${batch_size} \\
--learning_rate 3e-6 \\
--num_train_epochs 3 \\
--output_dir $output_dir \\
--overwrite_output_dir \\
--logging_steps 10 \\
--logging_dir $output_dir \\
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` | 4,788 |
Hate-speech-CNERG/dehatebert-mono-german | [
"NON_HATE",
"HATE"
] | ---
language: de
license: apache-2.0
---
This model is used for detecting **hate speech** in the **German language**. The "mono" in the name refers to the monolingual setting, in which the model is trained using only German-language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates, and the best validation score achieved is 0.649794, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
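A minimal usage sketch (not from the original card; it assumes the checkpoint loads with the standard `text-classification` pipeline and that the NON_HATE/HATE label names come from the model config):

```python
from transformers import pipeline

detector = pipeline("text-classification", model="Hate-speech-CNERG/dehatebert-mono-german")

# Hypothetical benign German example: "This is a completely normal comment."
print(detector("Das ist ein ganz normaler Kommentar."))
```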
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| 1,058 |
HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi | [
"angry",
"delighted",
"furious",
"happy",
"neutral"
] | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.
### DeepSentiPers
DeepSentiPers, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes: two positive (i.e., happy and delighted), two negative (i.e., furious and angry), and one neutral. This dataset can therefore be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
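Beyond the notebook, a minimal inline sketch (an addition to this card; it assumes this multi-class DeepSentiPers checkpoint loads with the standard `text-classification` pipeline, with label names taken from the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi",
)

# Hypothetical Persian example: "This product was excellent"
print(classifier("این محصول عالی بود"))
```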
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | 3,268 |
Narrativaai/deberta-v3-small-finetuned-hate_speech18 | [
"NO_HATE",
"HATE",
"IDK",
"RELATION"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- hate_speech18
widget:
- text: "ok, so do we need to kill them too or are the slavs okay ? for some reason whenever i hear the word slav , the word slobber comes to mind and i picture a slobbering half breed creature like the humpback of notre dame or Igor haha"
metrics:
- accuracy
model-index:
- name: deberta-v3-small-hate-speech
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: hate_speech18
type: hate_speech18
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.916058394160584
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa v3 small fine-tuned on hate_speech18 dataset for Hate Speech Detection
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the hate_speech18 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2922
- Accuracy: 0.9161
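A quick usage sketch (added for illustration, not generated by the Trainer; it assumes the standard `text-classification` pipeline, with the four labels NO_HATE/HATE/IDK/RELATION coming from the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Narrativaai/deberta-v3-small-finetuned-hate_speech18",
)
print(classifier("Have a nice day, everyone."))  # hypothetical benign input
```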
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4147 | 1.0 | 650 | 0.3910 | 0.8832 |
| 0.2975 | 2.0 | 1300 | 0.2922 | 0.9161 |
| 0.2575 | 3.0 | 1950 | 0.3555 | 0.9051 |
| 0.1553 | 4.0 | 2600 | 0.4263 | 0.9124 |
| 0.1267 | 5.0 | 3250 | 0.4238 | 0.9161 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 2,183 |
ml4pubmed/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext_pub_section | [
"BACKGROUND",
"CONCLUSIONS",
"METHODS",
"OBJECTIVE",
"RESULTS"
] | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
tags:
- text-classification
- document sections
- sentence classification
- document classification
- medical
- health
- biomedical
pipeline_tag: text-classification
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext_pub_section
- original model file name: textclassifer_BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext_pubmed_20k
- This is a fine-tuned checkpoint of `microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## usage in python
install transformers as needed: `pip install -U transformers`
run the following, changing the example text to your use case:
```python
from transformers import pipeline
model_tag = "ml4pubmed/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext_pub_section"
classifier = pipeline(
'text-classification',
model=model_tag,
)
prompt = """
Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
"""
classifier(
prompt,
) # classify the sentence
```
## metadata
### training_metrics
- val_accuracy: 0.8678670525550842
- val_matthewscorrcoef: 0.8222037553787231
- val_f1score: 0.866841197013855
- val_cross_entropy: 0.3674609065055847
- epoch: 8.0
- train_accuracy_step: 0.83984375
- train_matthewscorrcoef_step: 0.7790813446044922
- train_f1score_step: 0.837363600730896
- train_cross_entropy_step: 0.39843088388442993
- train_accuracy_epoch: 0.8538406491279602
- train_matthewscorrcoef_epoch: 0.8031334280967712
- train_f1score_epoch: 0.8521654605865479
- train_cross_entropy_epoch: 0.4116102457046509
- test_accuracy: 0.8578397035598755
- test_matthewscorrcoef: 0.8091378808021545
- test_f1score: 0.8566917181015015
- test_cross_entropy: 0.3963385224342346
- date_run: Apr-22-2022_t-19
- huggingface_tag: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
| 3,264 |
NikolajMunch/danish-emotion-classification | [
"Afsky",
"Frygt",
"Glæde",
"Overraskelse",
"Tristhed",
"Vrede"
] | ---
widget:
- text: "Hold da op! Kan det virkelig passe?"
language:
- "da"
tags:
- sentiment
- emotion
- danish
---
# **-- EMODa --**
## BERT model for Danish multi-class emotion classification
Classifies a Danish sentence into one of 6 different emotions:
| Danish emotion | Ekman's emotion |
| ----- | ----- |
| 😞 **Afsky** | Disgust |
| 😨 **Frygt** | Fear |
| 😄 **Glæde** | Joy |
| 😱 **Overraskelse** | Surprise |
| 😢 **Tristhed** | Sadness |
| 😠 **Vrede** | Anger |
# How to use
```python
from transformers import pipeline
model_path = "NikolajMunch/danish-emotion-classification"
classifier = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
prediction = classifier("Jeg er godt nok ked af at mine SMS'er er slettet")
print(prediction)
# [{'label': 'Tristhed', 'score': 0.9725030660629272}]
```
or
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("NikolajMunch/danish-emotion-classification")
model = AutoModelForSequenceClassification.from_pretrained("NikolajMunch/danish-emotion-classification")
```
| 1,207 |
elozano/tweet_emotion_eval | [
"Anger",
"Joy",
"Optimism",
"Sadness"
] | ---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "Stop sharing which songs did you listen to during this year on Spotify, NOBODY CARES"
example_title: "Anger"
- text: "I love that joke HAHAHAHAHA"
example_title: "Joy"
- text: "Despite I've not studied a lot for this exam, I think I will pass 😜"
example_title: "Optimism"
- text: "My dog died this morning..."
example_title: "Sadness"
---
| 433 |
elozano/tweet_sentiment_eval | [
"Negative",
"Neutral",
"Positive"
] | ---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "I love summer!"
example_title: "Positive"
- text: "Does anyone want to play?"
example_title: "Neutral"
- text: "This movie is just awful! 😫"
example_title: "Negative"
---
| 260 |
Rebreak/bert_news_class | null | ---
license: mit
---
Classifier of news affecting the stock price in the next 10 minutes | 88 |
Monsia/camembert-fr-covid-tweet-sentiment-classification | [
"negatif",
"neutre",
"positif"
] | ---
language:
- fr
tags:
- classification
license: apache-2.0
metrics:
- accuracy
widget:
- text: "tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les 'ont dit'..."
---
# camembert-fr-covid-tweet-sentiment-classification
This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), fine-tuned for sentiment classification of French COVID-19 tweets.
This model reaches an accuracy of 71% on the dev set.
In this dataset, given a tweet, the goal was to infer its underlying sentiment by choosing from three classes:
- 0 : negatif
- 1 : neutre
- 2 : positif
# Pipelining the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("Monsia/camembert-fr-covid-tweet-sentiment-classification")
model = AutoModelForSequenceClassification.from_pretrained("Monsia/camembert-fr-covid-tweet-sentiment-classification")
# "topics-classification" is not a valid pipeline task, and `pipeline` is imported
# directly, so use the text-classification task without a `transformers.` prefix
nlp_sentiment_classif = pipeline("text-classification", model=model, tokenizer=tokenizer)
nlp_sentiment_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...")
# Illustrative output: [{'label': 'negatif', 'score': 0.83}]
``` | 1,295 |
Theivaprakasham/bert-base-cased-twitter_sentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-twitter_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-twitter_sentiment
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6907
- Accuracy: 0.7132
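A usage sketch (an addition to this card, not part of the Trainer output; note that the checkpoint exposes generic `LABEL_0`/`LABEL_1`/`LABEL_2` ids, and the mapping to sentiment classes is not documented here):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Theivaprakasham/bert-base-cased-twitter_sentiment")
print(classifier("Loving the new update!"))  # returns a generic LABEL_* id
```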
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8901 | 1.0 | 1387 | 0.8592 | 0.6249 |
| 0.8085 | 2.0 | 2774 | 0.7600 | 0.6822 |
| 0.7336 | 3.0 | 4161 | 0.7170 | 0.6915 |
| 0.6938 | 4.0 | 5548 | 0.7018 | 0.7016 |
| 0.6738 | 5.0 | 6935 | 0.6926 | 0.7067 |
| 0.6496 | 6.0 | 8322 | 0.6910 | 0.7088 |
| 0.6599 | 7.0 | 9709 | 0.6902 | 0.7088 |
| 0.631 | 8.0 | 11096 | 0.6910 | 0.7095 |
| 0.6327 | 9.0 | 12483 | 0.6925 | 0.7146 |
| 0.6305 | 10.0 | 13870 | 0.6907 | 0.7132 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,926 |
ghanashyamvtatti/roberta-fake-news | null | A fake news detector using RoBERTa.
Dataset: https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset
Training involved using hyperparameter search with 10 trials. | 174 |
cardiffnlp/tweet-topic-19-multi | [
"arts_&_culture",
"business_&_entrepreneurs",
"celebrity_&_pop_culture",
"diaries_&_daily_life",
"family",
"fashion_&_style",
"film_tv_&_video",
"fitness_&_health",
"food_&_dining",
"gaming",
"learning_&_educational",
"music",
"news_&_social_concern",
"other_hobbies",
"relationships",
"science_&_technology",
"sports",
"travel_&_adventure",
"youth_&_student_life"
] | # tweet-topic-19-multi
This is a roBERTa-base model trained on ~90m tweets until the end of 2019 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m)), and finetuned for multi-label topic classification on a corpus of 11,267 tweets.
The original roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
| <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> |
|-----------------------------|---------------------|----------------------------|--------------------------|
| 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports |
| 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure |
| 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life |
| 4: family | 9: gaming | 14: relationships | |
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import expit
MODEL = f"cardiffnlp/tweet-topic-19-multi"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label
text = "It is great to see athletes promoting awareness for climate change."
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens)
scores = output[0][0].detach().numpy()
scores = expit(scores)
predictions = (scores >= 0.5) * 1
# TF
#tf_model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
#class_mapping = model.config.id2label
#text = "It is great to see athletes promoting awareness for climate change."
#tokens = tokenizer(text, return_tensors='tf')
#output = tf_model(**tokens)
#scores = output[0][0]
#scores = expit(scores)
#predictions = (scores >= 0.5) * 1
# Map to classes
for i in range(len(predictions)):
if predictions[i]:
print(class_mapping[i])
```
Output:
```
news_&_social_concern
sports
``` | 2,691 |
avichr/hebEMO_trust | null | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew user-generated content (UGC). It was trained on a unique COVID-19-related dataset that we collected and annotated.
HebEMO yielded a weighted average F1-score of 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from three major Israeli news sites between January and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (i.e., the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you use this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
| 5,443 |
raruidol/ArgumentRelation | [
"LABEL_0",
"LABEL_1"
] | # Argument Relation Mining
Best performing model trained in the "Transformer-Based Models for Automatic Detection of Argument Relations: A Cross-Domain Evaluation" paper.
Code available in https://github.com/raruidol/ArgumentRelationMining
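A hedged usage sketch (not from the original card; argument relation identification scores a pair of argumentative units, and the meaning of `LABEL_0`/`LABEL_1` is defined in the linked repository rather than here):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("raruidol/ArgumentRelation")
model = AutoModelForSequenceClassification.from_pretrained("raruidol/ArgumentRelation")

# Hypothetical pair of argumentative units.
inputs = tokenizer(
    "We should ban cars from the city centre.",
    "Air quality downtown has dropped sharply.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```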
Cite:
```
@article{ruiz2021transformer,
title={Transformer-based models for automatic identification of argument relations: A cross-domain evaluation},
author={Ruiz-Dolz, Ramon and Alemany, Jose and Heras, Stella and Garcia-Fornes, Ana},
journal={IEEE Intelligent Systems},
year={2021},
publisher={IEEE}
}
```
| 581 |
DeepPavlov/xlm-roberta-large-en-ru-mnli | [
"CONTRADICTION",
"ENTAILMENT",
"NEUTRAL"
] | ---
language:
- en
- ru
datasets:
- glue
- mnli
model_index:
- name: mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
tags:
- xlm-roberta
- xlm-roberta-large
- xlm-roberta-large-en-ru
- xlm-roberta-large-en-ru-mnli
widget:
- text: "Люблю тебя. Ненавижу тебя"
- text: "I love you. I hate you"
---
# XLM-RoBERTa-Large-En-Ru-MNLI
xlm-roberta-large-en-ru finetuned on mnli. | 489 |
dminiotas05/distilbert-base-uncased-finetuned-ft650_10class | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ft650_10class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft650_10class
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9674
- Accuracy: 0.2207
- F1: 0.2002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.1088 | 1.0 | 188 | 2.0460 | 0.1807 | 0.1324 |
| 1.9628 | 2.0 | 376 | 1.9867 | 0.2173 | 0.1821 |
| 1.8966 | 3.0 | 564 | 1.9693 | 0.2193 | 0.1936 |
| 1.8399 | 4.0 | 752 | 1.9674 | 0.2207 | 0.2002 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,658 |
Cameron/BERT-SBIC-targetcategory | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7"
] | Entry not found | 15 |
ethanyt/guwen-cls | [
"易藏",
"医藏",
"艺藏",
"史藏",
"佛藏",
"集藏",
"诗藏",
"子藏",
"儒藏",
"道藏"
] | ---
language:
- "zh"
thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
- "text classificatio"
license: "apache-2.0"
pipeline_tag: "text-classification"
widget:
- text: "子曰:“弟子入则孝,出则悌,谨而信,泛爱众,而亲仁。行有馀力,则以学文。”"
---
# Guwen CLS
A Classical Chinese Text Classifier.
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a> | 1,239 |
martin-ha/toxic-comment-model | [
"non-toxic",
"toxic"
] | ---
language: en
---
## Model description
This model is a fine-tuned version of the [DistilBERT model](https://huggingface.co/transformers/model_doc/distilbert.html) to classify toxic comments.
## How to use
You can use the model with the following code.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline
model_path = "martin-ha/toxic-comment-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline('This is a test text.'))
```
## Limitations and Bias
This model is intended to be used to classify toxic online comments. However, one limitation of the model is that it performs poorly for some comments that mention a specific identity subgroup, like Muslim. The following table shows an evaluation score for different identity groups. You can learn the specific meaning of these metrics [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation). But basically, these metrics show how well a model performs for a specific group. The larger the number, the better.
| **subgroup** | **subgroup_size** | **subgroup_auc** | **bpsn_auc** | **bnsp_auc** |
| ----------------------------- | ----------------- | ---------------- | ------------ | ------------ |
| muslim | 108 | 0.689 | 0.811 | 0.88 |
| jewish | 40 | 0.749 | 0.86 | 0.825 |
| homosexual_gay_or_lesbian | 56 | 0.795 | 0.706 | 0.972 |
| black | 84 | 0.866 | 0.758 | 0.975 |
| white | 112 | 0.876 | 0.784 | 0.97 |
| female | 306 | 0.898 | 0.887 | 0.948 |
| christian | 231 | 0.904 | 0.917 | 0.93 |
| male | 225 | 0.922 | 0.862 | 0.967 |
| psychiatric_or_mental_illness | 26 | 0.924 | 0.907 | 0.95 |
The table above shows that the model performs poorly for the Muslim and Jewish groups. In fact, if you pass the sentence "Muslims are people who follow or practice Islam, an Abrahamic monotheistic religion." into the model, it will classify the sentence as toxic. Be mindful of this type of potential bias.
## Training data
The training data comes from this [Kaggle competition](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). We use 10% of the `train.csv` data to train the model.
## Training procedure
You can see [this documentation and code](https://github.com/MSIA/wenyang_pan_nlp_project_2021) for how we trained the model. It takes about 3 hours on a P100 GPU.
## Evaluation results
The model achieves 94% accuracy and a 0.59 F1-score on a 10,000-row held-out test set.
gargam/roberta-base-crest | null | Entry not found | 15 |
bhadresh-savani/electra-base-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy, F1 Score
model-index:
- name: bhadresh-savani/electra-base-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
verified: true
- name: Precision Macro
type: precision
value: 0.911532655431019
verified: true
- name: Precision Micro
type: precision
value: 0.9265
verified: true
- name: Precision Weighted
type: precision
value: 0.9305456360257519
verified: true
- name: Recall Macro
type: recall
value: 0.8536923122511134
verified: true
- name: Recall Micro
type: recall
value: 0.9265
verified: true
- name: Recall Weighted
type: recall
value: 0.9265
verified: true
- name: F1 Macro
type: f1
value: 0.8657529340483895
verified: true
- name: F1 Micro
type: f1
value: 0.9265
verified: true
- name: F1 Weighted
type: f1
value: 0.924844632421077
verified: true
- name: loss
type: loss
value: 0.3268870413303375
verified: true
---
# Electra-base-emotion
## Model description:
## Model Performance Comparison on Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Sample per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
| [Electra-base-emotion](https://huggingface.co/bhadresh-savani/electra-base-emotion) | 91.95 | 91.90 | 472.72 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/electra-base-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.0006792712374590337},
{'label': 'joy', 'score': 0.9959300756454468},
{'label': 'love', 'score': 0.0009452480007894337},
{'label': 'anger', 'score': 0.0018055217806249857},
{'label': 'fear', 'score': 0.00041110432357527316},
{'label': 'surprise', 'score': 0.0002288572577526793}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
## Eval results
```json
{
'epoch': 8.0,
'eval_accuracy': 0.9195,
'eval_f1': 0.918975455617076,
'eval_loss': 0.3486028015613556,
'eval_runtime': 4.2308,
'eval_samples_per_second': 472.726,
'eval_steps_per_second': 7.564
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) | 3,642 |
fourthbrain-demo/model_trained_by_me2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: model_trained_by_me2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_trained_by_me2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4258
- Accuracy: 0.7983
- F1: 0.7888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,178 |
chinhon/fake_tweet_detect | null | Entry not found | 15 |
julien-c/reactiongif-roberta | [
"agree",
"applause",
"awww",
"dance",
"deal_with_it",
"do_not_want",
"eww",
"eye_roll",
"facepalm",
"fist_bump",
"good_luck",
"happy_dance",
"hearts",
"high_five",
"hug",
"idk",
"kiss",
"mic_drop",
"no",
"oh_snap",
"ok",
"omg",
"oops",
"please",
"popcorn",
"scared",
"seriously",
"shocked",
"shrug",
"sigh",
"slow_clap",
"smh",
"sorry",
"thank_you",
"thumbs_down",
"thumbs_up",
"want",
"win",
"wink",
"yawn",
"yes",
"yolo",
"you_got_this"
] | ---
license: apache-2.0
tags:
- generated-from-trainer
datasets:
- julien-c/reactiongif
metrics:
- accuracy
model-index:
- name: model
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.2662102282047272
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9150
- Accuracy: 0.2662
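A usage sketch (an addition to this card; it assumes the standard `text-classification` pipeline, with the 43 reaction-GIF categories coming from the model config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="julien-c/reactiongif-roberta")
print(classifier("That plot twist was unbelievable!"))  # hypothetical tweet-like input
```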
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.0528 | 0.44 | 1000 | 3.0265 | 0.2223 |
| 2.9836 | 0.89 | 2000 | 2.9263 | 0.2332 |
| 2.7409 | 1.33 | 3000 | 2.9041 | 0.2533 |
| 2.7905 | 1.77 | 4000 | 2.8763 | 0.2606 |
| 2.4359 | 2.22 | 5000 | 2.9072 | 0.2642 |
| 2.4507 | 2.66 | 6000 | 2.9230 | 0.2644 |
### Framework versions
- Transformers 4.7.0.dev0
- Pytorch 1.8.1+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
| 1,816 |
yosemite/autonlp-imdb-sentiment-analysis-english-470512388 | [
"negative",
"positive"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- yosemite/autonlp-data-imdb-sentiment-analysis-english
co2_eq_emissions: 256.38650494338367
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 470512388
- CO2 Emissions (in grams): 256.38650494338367
## Validation Metrics
- Loss: 0.18712733685970306
- Accuracy: 0.9388
- Precision: 0.9300274402195218
- Recall: 0.949
- AUC: 0.98323192
- F1: 0.9394179370421698
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/yosemite/autonlp-imdb-sentiment-analysis-english-470512388
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("yosemite/autonlp-imdb-sentiment-analysis-english-470512388", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("yosemite/autonlp-imdb-sentiment-analysis-english-470512388", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,215 |
Wi/arxiv-distilbert-base-cased | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
language:
- en
datasets:
- arxiv_dataset
tags:
- distilbert
---
# DistilBERT ArXiv Category Classification
DistilBERT model fine-tuned on a small subset of the [ArXiv dataset](https://www.kaggle.com/datasets/Cornell-University/arxiv) to predict the category of a given paper.
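A usage sketch (an addition to this card; the checkpoint ships generic `LABEL_0`..`LABEL_9` ids, so the id-to-category mapping has to come from the training setup):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Wi/arxiv-distilbert-base-cased")

abstract = "We propose a new attention mechanism for neural machine translation."  # hypothetical input
print(classifier(abstract))
```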
| 305 |
facebook/roberta-hate-speech-dynabench-r4-target | null | ---
language: en
---
# LFTW R4 Target
The R4 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
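## Usage
The card does not include example code; a minimal sketch with the `transformers` pipeline, using the Hub id of this model:
```python
from transformers import pipeline

detector = pipeline("text-classification",
                    model="facebook/roberta-hate-speech-dynabench-r4-target")
print(detector("You are a wonderful person."))
```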
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! | 570 |
NTUYG/DeepSCC-RoBERTa | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ## How to use
```python
from simpletransformers.classification import ClassificationModel, ClassificationArgs
name_file = ['bash', 'c', 'c#', 'c++','css', 'haskell', 'java', 'javascript', 'lua', 'objective-c', 'perl', 'php', 'python','r','ruby', 'scala', 'sql', 'swift', 'vb.net']
deep_scc_model_args = ClassificationArgs(num_train_epochs=10,max_seq_length=300,use_multiprocessing=False)
deep_scc_model = ClassificationModel("roberta", "NTUYG/DeepSCC-RoBERTa", num_labels=19, args=deep_scc_model_args,use_cuda=True)
code = ''' public static double getSimilarity(String phrase1, String phrase2) {
return (getSC(phrase1, phrase2) + getSC(phrase2, phrase1)) / 2.0;
}'''
code = code.replace('\n',' ').replace('\r',' ')
predictions, raw_outputs = deep_scc_model.predict([code])
predict = name_file[predictions[0]]
print(predict)
```
| 839 |
textattack/xlnet-base-cased-MNLI | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Souvikcmsa/BERT_sentiment_analysis | [
"negative",
"neutral",
"positive"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
- Output: "Positive"
datasets:
- Souvikcmsa/autotrain-data-sentiment_analysis
co2_eq_emissions: 0.029363397844935534
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification (3-class Sentiment Classification)
## Validation Metrics
If you search for a sentiment analysis model on Hugging Face, you will find a model from finiteautomata that reports micro and macro F1 scores of around 67%. This model reaches roughly 80% macro and micro F1.
- Loss: 0.4992932379245758
- Accuracy: 0.799017824663514
- Macro F1: 0.8021508522962549
- Micro F1: 0.799017824663514
- Weighted F1: 0.7993775463659935
- Macro Precision: 0.80406197665167
- Micro Precision: 0.799017824663514
- Weighted Precision: 0.8000374433849405
- Macro Recall: 0.8005261994732908
- Micro Recall: 0.799017824663514
- Weighted Recall: 0.799017824663514
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Souvikcmsa/autotrain-sentiment_analysis-762923428
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Souvikcmsa/autotrain-sentiment_analysis-762923428", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Souvikcmsa/autotrain-sentiment_analysis-762923428", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
OR
```
from transformers import pipeline
classifier = pipeline("text-classification", model = "Souvikcmsa/BERT_sentiment_analysis")
classifier("I loved Star Wars so much!")# Positive
classifier("A soccer game with multiple males playing. Some men are playing a sport.")# Neutral
``` | 1,918 |
autoevaluate/binary-classification | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: autoevaluate-binary-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8967889908256881
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.8967889908256881
verified: true
- name: Precision
type: precision
value: 0.8898678414096917
verified: true
- name: Recall
type: recall
value: 0.9099099099099099
verified: true
- name: AUC
type: auc
value: 0.967247621453229
verified: true
- name: F1
type: f1
value: 0.8997772828507795
verified: true
- name: loss
type: loss
value: 0.30091655254364014
verified: true
- name: matthews_correlation
type: matthews_correlation
value: 0.793630584795814
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Accuracy: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.175 | 1.0 | 4210 | 0.3009 | 0.8968 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
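### Usage example
A minimal inference sketch for the fine-tuned checkpoint (sentiment polarity on SST-2-style inputs), not part of the auto-generated card:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="autoevaluate/binary-classification")
print(clf("A touching and beautifully shot film."))  # expected to score as positive
```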
| 2,431 |
assemblyai/distilbert-base-uncased-qqp | null | # DistilBERT-Base-Uncased for Duplicate Question Detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) dataset, part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post]().
## Usage
To download and utilize this model for duplicate question detection please execute the following:
```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-qqp")
model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-qqp")
tokenized_segments = tokenizer(["How many hours does it take to fly from California to New York?"], ["What is the flight time from New York to Seattle?"], return_tensors="pt", padding=True, truncation=True)
tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask
model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1)
print("Duplicate probability: "+str(model_predictions[0][1].item()*100)+"%")
print("Non-duplicate probability: "+str(model_predictions[0][0].item()*100)+"%")
```
For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)! | 1,845 |
avichr/hebEMO_sadness | null | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results outperform the best previously reported performance, even when compared to English-language models.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (i.e., the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")
# how to use?
sentiment_analysis = pipeline(
"sentiment-analysis",
model="avichr/heBERT_sentiment_analysis",
tokenizer="avichr/heBERT_sentiment_analysis",
return_all_scores = True
)
sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]
sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]
sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you use this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
| 5,444 |
mrm8488/distilroberta-finetuned-tweets-hate-speech | null | ---
language: en
tags:
- twitter
- hate
- speech
datasets:
- tweets_hate_speech_detection
widget:
- text: "the fuck done with #mansplaining and other bullshit."
---
# distilroberta-base fine-tuned on tweets_hate_speech_detection dataset for hate speech detection
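## Usage
A minimal sketch with the `transformers` pipeline (label names follow the checkpoint's own id2label mapping):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="mrm8488/distilroberta-finetuned-tweets-hate-speech")
print(classifier("I really enjoyed the conference talks today!"))
```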
Validation accuracy: 0.98 | 289 |
pertschuk/albert-intent-model-v3 | null | Entry not found | 15 |
shatabdi/twisent_twisent | [
"NEGATIVE",
"POSITIVE"
] | ---
tags:
- generated_from_trainer
model-index:
- name: twisent_twisent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twisent_twisent
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,084 |
jeanconstantin/distilcausal_bert_fr | null | Entry not found | 15 |
avichr/hebEMO_surprise | null | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results outperform the best previously reported performance, even when compared to English-language models.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise, and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (i.e., the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")
# how to use?
sentiment_analysis = pipeline(
"sentiment-analysis",
model="avichr/heBERT_sentiment_analysis",
tokenizer="avichr/heBERT_sentiment_analysis",
return_all_scores = True
)
sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]
sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]
sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you use this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
| 5,442 |
elozano/bert-base-cased-clickbait-news | [
"Clickbait",
"Normal"
] | Entry not found | 15 |
RohanJoshi28/twitter_sentiment_analysisv1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
cross-encoder/mmarco-mdeberta-v3-base-5negs-v1 | [
"LABEL_0"
] | Entry not found | 15 |
Hate-speech-CNERG/dehatebert-mono-portugese | [
"NON_HATE",
"HATE"
] | ---
language: pt
license: apache-2.0
---
This model is used for detecting **hate speech** in **Portuguese**. The mono in the name refers to the monolingual setting, where the model is trained using only Portuguese-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.716119 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
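### Usage
The card does not show inference code; a minimal sketch (label names follow the checkpoint's id2label mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hate-speech-CNERG/dehatebert-mono-portugese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# A benign Portuguese example ("Good morning, how are you?").
inputs = tokenizer("Bom dia, tudo bem?", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print({model.config.id2label[i]: round(p, 4) for i, p in enumerate(probs[0].tolist())})
```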
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| 1,061 |
kuzgunlar/electra-turkish-sentiment-analysis | [
"Negative",
"Positive"
] | Entry not found | 15 |
cardiffnlp/tweet-topic-21-single | [
"arts_&_culture",
"business_&_entrepreneurs",
"daily_life",
"pop_culture",
"science_&_technology",
"sports_&_gaming"
] | # tweet-topic-21-single
This is a roBERTa-base model trained on ~124M tweets from January 2018 to December 2021 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m)), and finetuned for single-label topic classification on a corpus of 6,997 tweets.
The original roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
- 0 -> arts_&_culture;
- 1 -> business_&_entrepreneurs;
- 2 -> pop_culture;
- 3 -> daily_life;
- 4 -> sports_&_gaming;
- 5 -> science_&_technology
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
MODEL = f"cardiffnlp/tweet-topic-21-single"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label
text = "Tesla stock is on the rise!"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# TF
#model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
#class_mapping = model.config.id2label
#text = "Tesla stock is on the rise!"
#encoded_input = tokenizer(text, return_tensors='tf')
#output = model(**encoded_input)
#scores = output[0][0]
#scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = class_mapping[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) business_&_entrepreneurs 0.8361
2) science_&_technology 0.0904
3) pop_culture 0.0288
4) daily_life 0.0178
5) arts_&_culture 0.0137
6) sports_&_gaming 0.0133
``` | 2,137 |
DTAI-KULeuven/mbert-corona-tweets-belgium-topics | [
"closing-horeca",
"curfew",
"lockdown",
"masks",
"not-applicable",
"other-measure",
"quarantine",
"schools",
"testing",
"vaccine"
] | ---
language: "multilingual"
tags:
- Dutch
- French
- English
- Tweets
- Topic classification
widget:
- text: "I really can't wait for this lockdown to be over and go back to waking up early."
---
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
We categorized several months' worth of these Tweets by topic (government COVID measure) and the opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).

Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
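## How to use
A minimal sketch, reusing the widget example from this card:
```python
from transformers import pipeline

topic_model = pipeline("text-classification",
                       model="DTAI-KULeuven/mbert-corona-tweets-belgium-topics")
# Expected to land on one of the measure topics listed for this model,
# e.g. "curfew" or "lockdown".
print(topic_model("I really can't wait for this lockdown to be over "
                  "and go back to waking up early."))
```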
| 1,193 |
Emanuel/bertweet-emotion-base | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bertweet-emotion-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.945
---
# bertweet-emotion-base
This model is a fine-tuned version of [Bertweet](https://huggingface.co/vinai/bertweet-base). It achieves the following results on the evaluation set:
- Loss: 0.1172
- Accuracy: 0.945
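## How to use
A minimal inference sketch (the emotion labels for this checkpoint are sadness, joy, love, anger, fear, and surprise):
```python
from transformers import pipeline

emotion = pipeline("text-classification", model="Emanuel/bertweet-emotion-base")
print(emotion("I am so happy to see you again!"))  # expected label: joy
```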
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- lr_scheduler_type: linear
- num_epochs: 6.0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3 | 893 |
edwardgowsmith/en-finegrained-zero-shot | null | Entry not found | 15 |
RANG012/SENATOR | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: SENATOR
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.916
- name: F1
type: f1
value: 0.9166666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SENATOR
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2707
- Accuracy: 0.916
- F1: 0.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
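### Usage example
A minimal sketch for the IMDB-fine-tuned checkpoint, not part of the auto-generated card:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="RANG012/SENATOR")
# Expected to score as the positive class.
print(clf("One of the best films I have seen in years."))
```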
| 1,443 |
lupinlevorace/tiny-bert-sst2-distilled | [
"negative",
"positive"
] | Entry not found | 15 |
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Similarity | null | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "今天心情不好[SEP]今天很开心"
---
# Erlangshen-MegatronBert-1.3B-Similarity, a Chinese model, one of the models of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
We collected 20 paraphrase datasets in the Chinese domain for fine-tuning, with a total of 2,773,880 samples. Our model is mainly based on [MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B).
## Usage
```python
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer=BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Similarity')
model=AutoModelForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Similarity')
texta='今天的饭不好吃'
textb='今天心情不好'
output=model(torch.tensor([tokenizer.encode(texta,textb)]))
print(torch.nn.functional.softmax(output.logits,dim=-1))
```
## Scores on downstream Chinese tasks (the dev sets of BUSTM and AFQMC may overlap with the training set)
| Model | BQ | BUSTM | AFQMC |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-Similarity | 85.41 | 95.18 | 81.72 |
| Erlangshen-Roberta-330M-Similarity | 86.21 | 99.29 | 93.89 |
| Erlangshen-MegatronBert-1.3B-Similarity | 86.31 | - | - |
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | 1,665 |
blanchefort/rubert-base-cased-sentiment-rurewiews | [
"NEUTRAL",
"POSITIVE",
"NEGATIVE"
] | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuReviews
---
# RuBERT for Sentiment Analysis of Product Reviews
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuReviews](https://github.com/sismetanin/rureviews).
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
## Dataset used for model training
**[RuReviews](https://github.com/sismetanin/rureviews)**
> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.
| 1,269 |
dobbytk/letr-sol-profanity-filter | [
"hate",
"none",
"offensive"
] | Entry not found | 15 |
ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli | [
"entailment",
"neutral",
"contradiction"
] | Entry not found | 15 |
poison-texts/imdb-sentiment-analysis-poisoned-75 | null | ---
license: apache-2.0
---
| 28 |
NYTK/sentiment-hts5-xlm-roberta-hungarian | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
language:
- hu
tags:
- text-classification
license: gpl
metrics:
- accuracy
widget:
- text: "Jó reggelt! majd küldöm az élményhozókat :)."
---
# Hungarian Sentence-level Sentiment Analysis model with XLM-RoBERTa
For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained model used: XLM-RoBERTa base
- Finetuned on Hungarian Twitter Sentiment (HTS) Corpus
- Labels: 1, 2, 3, 4, 5
## Limitations
- max_seq_length = 128
## Results
| Model | HTS2 | HTS5 |
| ------------- | ------------- | ------------- |
| huBERT | 85.55 | 68.99 |
| XLM-RoBERTa| 85.56 | **85.56** |
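## Usage
A minimal sketch, reusing the widget example from the card (`LABEL_0`..`LABEL_4` presumably correspond to the 1-5 sentiment scale listed above):
```python
from transformers import pipeline

sentiment = pipeline("text-classification",
                     model="NYTK/sentiment-hts5-xlm-roberta-hungarian")
# Hungarian widget example: "Good morning! I'll send the experience-bringers :)."
print(sentiment("Jó reggelt! majd küldöm az élményhozókat :)."))
```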
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods},
booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)},
year = {2021},
publisher = {IEEE},
address = {Online},
author = {{Laki, László and Yang, Zijian Győző}},
pages = {417--422}
}
``` | 1,163 |
dennlinger/bert-wiki-paragraphs | [
"0",
"1"
] | # BERT-Wiki-Paragraphs
Authors: Satya Almasian\*, Dennis Aumiller\*, Lucienne-Sophie Marmé, Michael Gertz
Contact us at `<lastname>@informatik.uni-heidelberg.de`
Details for the training method can be found in our work [Structural Text Segmentation of Legal Documents](https://arxiv.org/abs/2012.03619).
The training procedure follows the same setup, but we substitute Wikipedia for the legal documents in this model.
Training is performed in a form of weakly-supervised fashion to determine whether paragraphs topically belong together or not.
We utilize automatically generated samples from Wikipedia for training, where paragraphs from within the same section are assumed to be topically coherent.
We use the same articles as [Koshorek et al. (2018)](https://arxiv.org/abs/1803.09337),
albeit from a 2021 dump of Wikipedia, and split at paragraph boundaries instead of the sentence level.
## Training Setup
The model was trained for 3 epochs from `bert-base-uncased` on paragraph pairs (limited to 512 subwords with the `longest_first` truncation strategy).
We use a batch size of 24 with 2 iterations of gradient accumulation (effective batch size of 48), and a learning rate of 1e-4, with gradient clipping at 5.
Training was performed on a single Titan RTX GPU over the duration of 3 weeks.
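## Usage
The card describes the training setup but not inference; a minimal sketch for scoring a paragraph pair (label semantics are assumed from the training setup, with 1 taken to mean "topically coherent"):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "dennlinger/bert-wiki-paragraphs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

para1 = "The Eiffel Tower is a wrought-iron lattice tower in Paris."
para2 = "It was designed by Gustave Eiffel's company for the 1889 World's Fair."
# Mirror the training setup: pair input, longest_first truncation at 512 subwords.
inputs = tokenizer(para1, para2, return_tensors="pt",
                   truncation="longest_first", max_length=512)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # probabilities over labels 0 and 1
```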
| 1,298 |
NDugar/v3-Large-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
tags:
- deberta-v1
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4103
- Accuracy: 0.9175
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3631 | 1.0 | 49088 | 0.3129 | 0.9130 |
| 0.2267 | 2.0 | 98176 | 0.4157 | 0.9153 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
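### Usage
The card tags the model for zero-shot classification; a minimal sketch using the MNLI head through the zero-shot pipeline:
```python
from transformers import pipeline

zsc = pipeline("zero-shot-classification", model="NDugar/v3-Large-mnli")
print(zsc("The stock market rallied after the earnings report.",
          candidate_labels=["finance", "sports", "politics"]))
```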
| 1,110 |
amirhossein1376/pft-clf-finetuned | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
language: fa
widget:
- text: "امروز دربی دو تیم پرسپولیس و استقلال در ورزشگاه آزادی تهران برگزار میشود."
- text: "وزیر امور خارجه اردن تاکید کرد که همه کشورهای عربی خواهان روابط خوب با ایران هستند.
به گزارش ایسنا به نقل از شبکه فرانس ۲۴، ایمن الصفدی معاون نخستوزیر و وزیر امور خارجه اردن پس از کنفرانس لیبی در پاریس در گفتوگویی با فرانس ۲۴ تاکید کرد: موضع اردن روشن است، ما خواستار روابط منطقهای مبتنی بر حسن همجواری و عدم مداخله در امور داخلی هستیم. بسیاری از مسائل و مشکلات منطقه نیاز به رسیدگی از طریق گفتوگو دارد.
الصفدی هرگونه گفتوگوی با واسطه اردن با ایران را رد کرده و گفت: ما با نمایندگان هیچکس صحبت نمیکنیم و زمانی که با ایران صحبت میکنیم مستقیماً با دولت این کشور بوده و از طریق تماس تلفنی وزیر امور خارجه دو کشور.
وی تاکید کرد: همه در منطقه عربی خواستار روابط خوب با ایران هستند، اما برای تحقق این امر باید روابط بر اساس شفافیت و بر اساس اصول احترام به همسایگی و عدم مداخله در امور داخلی باشد.
"
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: pft-clf-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pft-clf-finetuned
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on the "FarsNews1398" dataset. This dataset contains a collection of news gathered from the Farsnews website, a news agency in Iran. You can download the dataset from [here](https://www.kaggle.com/amirhossein76/farsnews1398). I used the category, abstract, and paragraphs of each news item for text classification: the "abstract" and "paragraphs" fields were concatenated as input, and "category" was used as the classification target.
The notebook used for fine-tuning can be found [here](https://colab.research.google.com/drive/1jC2dfKRASxCY-b6bJSPkhEJfQkOA30O0?usp=sharing). I've reported the loss and Matthews correlation on the validation set.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Matthews Correlation: 0.9830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.0634 | 1.0 | 20276 | 0.0617 | 0.9830 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
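### Usage
A minimal sketch, reusing the first widget example from the card (note the checkpoint exposes generic `LABEL_0`..`LABEL_11` ids rather than named categories):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="amirhossein1376/pft-clf-finetuned")
# Widget example: "Today's derby between Persepolis and Esteghlal takes place
# at Tehran's Azadi stadium."
print(clf("امروز دربی دو تیم پرسپولیس و استقلال در ورزشگاه آزادی تهران برگزار میشود."))
```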
| 2,979 |
cross-encoder/msmarco-MiniLM-L6-en-de-v1 | [
"LABEL_0"
] | ---
license: apache-2.0
---
# Cross-Encoder for MS MARCO - EN-DE
This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html).
The training code is available in this repository, see `train_script.py`.
## Usage with SentenceTransformers
When you have [SentenceTransformers](https://www.sbert.net/) installed, you can use the model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/msmarco-MiniLM-L6-en-de-v1', max_length=512)
query = 'How many people live in Berlin?'
docs = ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs)
```
## Usage with Transformers
With the transformers library, you can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/msmarco-MiniLM-L6-en-de-v1')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/msmarco-MiniLM-L6-en-de-v1')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Performance
The performance was evaluated on three datasets:
- **TREC-DL19 EN-EN**: The original [TREC 2019 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019.html): Given an English query and 1000 documents (retrieved by BM25 lexical search), rank documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46, a perfect re-ranker can achieve a score of 95.47.
- **TREC-DL19 DE-EN**: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
- **GermanDPR DE-DE**: The [GermanDPR](https://www.deepset.ai/germanquad) dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 Million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.
We also check the performance of bi-encoders using the same evaluation: The retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.
| Model-Name | TREC-DL19 EN-EN | TREC-DL19 DE-EN | GermanDPR DE-DE | Docs / Sec |
| ------------- |:-------------:| :-----: | :---: | :----: |
| BM25 | 45.46 | - | 35.85 | -|
| **Cross-Encoder Re-Rankers** | | | |
| [cross-encoder/msmarco-MiniLM-L6-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L6-en-de-v1) | 72.43 | 65.53 | 46.77 | 1600 |
| [cross-encoder/msmarco-MiniLM-L12-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L12-en-de-v1) | 72.94 | 66.07 | 49.91 | 900 |
| [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) (DE only) | - | - | 53.67 | 260 |
| [deepset/gbert-base-germandpr-reranking](https://huggingface.co/deepset/gbert-base-germandpr-reranking) (DE only) | - | - | 53.59 | 260 |
| **Bi-Encoders (re-ranking)** | | | |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) | 63.38 | 58.28 | 37.88 | 940 |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch) | 65.51 | 58.69 | 38.32 | 940 |
| [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased) (DE only) | - | - | 34.31 | 450 |
| [deepset/gbert-base-germandpr-question_encoder](https://huggingface.co/deepset/gbert-base-germandpr-question_encoder) (DE only) | - | - | 42.55 | 450 |
Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
| 4,798 |