oxygeneDev committed on
Commit 743a19b · verified · 1 Parent(s): 97c6d63

multilingual upgrade upload of sarcasm-detector

Files changed (6)
  1. README.md +18 -9
  2. config.json +8 -3
  3. pytorch_model.bin +2 -2
  4. tokenizer_config.json +1 -1
  5. training_args.bin +2 -2
  6. vocab.txt +0 -0
README.md CHANGED
@@ -1,16 +1,18 @@
 ---
-language: "en"
+language: "multilingual"
 tags:
 - bert
 - sarcasm-detection
 - text-classification
 widget:
+- text: "Gli Usa a un passo dalla recessione"
 - text: "CIA Realizes It's Been Using Black Highlighters All These Years."
+- text: "We deden een man een nacht in een vat met cola en nu is hij dood"
 ---
 
-# English Sarcasm Detector
+# Multilingual Sarcasm Detector
 
-English Sarcasm Detector is a text classification model built to detect sarcasm from news article titles. It is fine-tuned on [bert-base-uncased](https://huggingface.co/bert-base-uncased) and the training data consists of ready-made dataset available on Kaggle.
+Multilingual Sarcasm Detector is a text classification model built to detect sarcasm from news article titles. It is fine-tuned on [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) and the training data consists of ready-made datasets available on Kaggle as well as data scraped from multiple newspapers in English, Dutch and Italian.
 
 
 <b>Labels</b>:
@@ -19,15 +21,22 @@ English Sarcasm Detector is a text classification model built to detect sarcasm
 
 
 ## Source Data
 
 Datasets:
 - English language data: [Kaggle: News Headlines Dataset For Sarcasm Detection](https://www.kaggle.com/datasets/rmisra/news-headlines-dataset-for-sarcasm-detection).
+- Dutch non-sarcastic data: [Kaggle: Dutch News Articles](https://www.kaggle.com/datasets/maxscheijen/dutch-news-articles)
+
+Scraped data:
+- Dutch sarcastic news from [De Speld](https://speld.nl)
+- Italian non-sarcastic news from [Il Giornale](https://www.ilgiornale.it)
+- Italian sarcastic news from [Lercio](https://www.lercio.it)
 
 ## Training Dataset
 - [helinivan/sarcasm_headlines_multilingual](https://huggingface.co/datasets/helinivan/sarcasm_headlines_multilingual)
 
 ## Codebase:
-- Git Repo: [Official repository](https://github.com/helinivan/multilingual-sarcasm-detector).
+- Git Repo: [Official repository](https://github.com/helinivan/multilingual-sarcasm-detector)
+
 
 ---
 
@@ -41,7 +50,7 @@ import string
 def preprocess_data(text: str) -> str:
     return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()
 
-MODEL_PATH = "helinivan/english-sarcasm-detector"
+MODEL_PATH = "helinivan/multilingual-sarcasm-detector"
 
 tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
 model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
@@ -59,13 +68,13 @@ results = {"is_sarcastic": prediction, "confidence": confidence}
 Output:
 
 ```
-{'is_sarcastic': 1, 'confidence': 0.9337034225463867}
+{'is_sarcastic': 1, 'confidence': 0.9374828934669495}
 ```
 
 ## Performance
 | Model-Name | F1 | Precision | Recall | Accuracy
 | ------------- |:-------------| -----| -----| ----|
-| [helinivan/english-sarcasm-detector](https://huggingface.co/helinivan/english-sarcasm-detector) | **92.38** | 92.75 | 92.38 | 92.42
+| [helinivan/english-sarcasm-detector](https://huggingface.co/helinivan/english-sarcasm-detector) | 92.38 | 92.75 | 92.38 | 92.42
 | [helinivan/italian-sarcasm-detector](https://huggingface.co/helinivan/italian-sarcasm-detector) | 88.26 | 87.66 | 89.66 | 88.69
-| [helinivan/multilingual-sarcasm-detector](https://huggingface.co/helinivan/multilingual-sarcasm-detector) | 87.23 | 88.65 | 86.33 | 88.30
+| [helinivan/multilingual-sarcasm-detector](https://huggingface.co/helinivan/multilingual-sarcasm-detector) | **87.23** | 88.65 | 86.33 | 88.30
 | [helinivan/dutch-sarcasm-detector](https://huggingface.co/helinivan/dutch-sarcasm-detector) | 83.02 | 84.27 | 82.01 | 86.81
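The usage snippet in the README diff is only a fragment. A self-contained sketch of the same inference flow, using the `helinivan/multilingual-sarcasm-detector` model id from this commit (the `max_length=256` tokenizer setting is an assumption, not stated in the diff), might look like:

```python
import string

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_PATH = "helinivan/multilingual-sarcasm-detector"


def preprocess_data(text: str) -> str:
    # Lowercase and strip punctuation, as in the README's preprocessing step.
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()


def detect_sarcasm(text: str) -> dict:
    # Load the fine-tuned multilingual checkpoint and classify one headline.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
    tokenized = tokenizer(
        [preprocess_data(text)],
        padding=True,
        truncation=True,
        max_length=256,  # assumption: not specified in this diff
        return_tensors="pt",
    )
    with torch.no_grad():
        output = model(**tokenized)
    probs = output.logits.softmax(dim=-1).tolist()[0]
    confidence = max(probs)
    return {"is_sarcastic": probs.index(confidence), "confidence": confidence}


# Example (downloads the model weights on first run):
# detect_sarcasm("CIA Realizes It's Been Using Black Highlighters All These Years.")
```

Since the model was trained on lowercased, punctuation-stripped headlines, skipping `preprocess_data` at inference time would shift inputs away from the training distribution.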
config.json CHANGED
@@ -1,11 +1,11 @@
 {
-  "_name_or_path": "bert-base-uncased",
+  "_name_or_path": "bert-base-multilingual-uncased",
   "architectures": [
     "BertForSequenceClassification"
   ],
   "attention_probs_dropout_prob": 0.1,
   "classifier_dropout": null,
-  "gradient_checkpointing": false,
+  "directionality": "bidi",
   "hidden_act": "gelu",
   "hidden_dropout_prob": 0.1,
   "hidden_size": 768,
@@ -17,11 +17,16 @@
   "num_attention_heads": 12,
   "num_hidden_layers": 12,
   "pad_token_id": 0,
+  "pooler_fc_size": 768,
+  "pooler_num_attention_heads": 12,
+  "pooler_num_fc_layers": 3,
+  "pooler_size_per_head": 128,
+  "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
   "problem_type": "single_label_classification",
   "torch_dtype": "float32",
   "transformers_version": "4.24.0",
   "type_vocab_size": 2,
   "use_cache": true,
-  "vocab_size": 30522
+  "vocab_size": 105879
 }
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:93827e574975e434a95b6b5b9c9ffbae9132f863a0dc467b89121e88f72f1fe2
-size 438006125
+oid sha256:27bbe03121cad8bd4234e90b72aa504dd44fa2d0fd993ced22cba5208cab33ca
+size 669502829
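The checkpoint growth here lines up exactly with the vocabulary change in config.json (30522 → 105879 tokens at hidden size 768, stored as float32), which suggests the extra bytes are precisely the larger word-embedding matrix. A quick arithmetic check, using only numbers that appear in this commit:

```python
# Vocabulary sizes from config.json: bert-base-uncased vs. bert-base-multilingual-uncased.
OLD_VOCAB, NEW_VOCAB = 30522, 105879
HIDDEN_SIZE = 768       # "hidden_size" in config.json
BYTES_PER_PARAM = 4     # "torch_dtype": "float32"

# Additional bytes needed for the extra embedding rows.
extra_embedding_bytes = (NEW_VOCAB - OLD_VOCAB) * HIDDEN_SIZE * BYTES_PER_PARAM

# pytorch_model.bin sizes from the LFS pointers before and after this commit.
OLD_CKPT, NEW_CKPT = 438006125, 669502829

# The entire size difference is accounted for by the new embedding rows.
assert NEW_CKPT - OLD_CKPT == extra_embedding_bytes
print(extra_embedding_bytes)  # 231496704 bytes, i.e. ~231 MB
```

This is why swapping in a multilingual base model inflates the checkpoint by roughly 50% even though every transformer layer keeps the same shape.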
tokenizer_config.json CHANGED
@@ -4,7 +4,7 @@
   "do_lower_case": true,
   "mask_token": "[MASK]",
   "model_max_length": 512,
-  "name_or_path": "bert-base-uncased",
+  "name_or_path": "bert-base-multilingual-uncased",
   "never_split": null,
   "pad_token": "[PAD]",
   "sep_token": "[SEP]",
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f964b93090dcfc704c2bc0c2a87b6992d5ef9f462e37dd6e4e2969c383cf0284
-size 3311
+oid sha256:8fd7b1893ae4abf4769058ee62f97442a4fffade3dbe564e7b76c86904c601da
+size 3375
vocab.txt CHANGED
The diff for this file is too large to render. See raw diff