nahiar committed
Commit 00e77b8 · verified · 1 Parent(s): 3e2d7f6

Upload folder using huggingface_hub

Files changed (6)
  1. README.md +280 -3
  2. config.json +40 -0
  3. model.safetensors +3 -0
  4. special_tokens_map.json +37 -0
  5. tokenizer_config.json +57 -0
  6. vocab.txt +0 -0
README.md CHANGED
@@ -1,3 +1,280 @@
- ---
- license: mit
- ---
---
widget:
- text: >-
    Gapapa kalian gak tahu band Indo ini. Tapi jangan becanda. Karena mereka
    berani menyanyikan dengan lantang bagaimana aktivis ditikam, diracun,
    dikursilitrikkan, dan dibunuh di udara. Orang-orang yang berkorban nyawa
    supaya kalian menikmati hari ini sambil ngetwit tanpa khawatir
  example_title: Example 1
  output:
  - label: Negative
    score: 0.2964
  - label: Neutral
    score: 0.0067
  - label: Positive
    score: 0.6969
- text: >-
    Selama ada kelompok yg ingin jd mesias, selama itu jg govt punya justifikasi
    but bikin banyak aturan = celah korup/power abuse. Keadilan adalah
    deregulasi.
  example_title: Example 2
  output:
  - label: Negative
    score: 0.971
  - label: Neutral
    score: 0.0165
  - label: Positive
    score: 0.0126
- text: >-
    saat pendukungmu oke😹 gas ✌🏽oke😹 gas ✌🏽tapi kamu malah ketawa 🤣 itu ga
    respek 😠banget wok jangan lupa makan siang 😁geratisnya wok😋😹✌🏽
  example_title: Example 3
  output:
  - label: Negative
    score: 0.6457
  - label: Neutral
    score: 0.048
  - label: Positive
    score: 0.3063
- text: >-
    Infoin loker wfh/freelance untuk mahasiswa dong, pengin bangget buat
    tambahan uang jajan di kos
  example_title: Example 4
  output:
  - label: Negative
    score: 0.0544
  - label: Neutral
    score: 0.6973
  - label: Positive
    score: 0.2482
- text: >-
    Cari kerja sekarang tuh susah. Anaknya Presiden aja mesti dicariin kerjaan
    sama bapaknya
  example_title: Example 5
  output:
  - label: Negative
    score: 0.9852
  - label: Neutral
    score: 0.0116
  - label: Positive
    score: 0.0032
- text: >-
    Komisi Penyiaran Indonesia (KPI) meminta agar tayangan televisi menampilkan
    citra positif Polri secara edukatif dan akurat. Hal ini disampaikan ketua
    KPI Pusat Ubaidillah dalam sebuah diskusi panel
  example_title: Example 6
  output:
  - label: Neutral
    score: 0.9932
  - label: Positive
    score: 0.0063
  - label: Negative
    score: 0.0005
- text: >-
    Jgnkan tweet becandaan.. kadang tweet normal yg gue baca 'oh menarik' trs
    gue like/retweet, trs gue tinggal tidur, BESOKNYA ITU TWEET DIRUJAK. Gue jadi
    mikir, ini emang gue yang merasa semua hal menarik dan semua org bisa aja
    bener.. ATAU.. SEMUA ORANG jadi sensitif
  example_title: Example 7
  output:
  - label: Negative
    score: 0.5531
  - label: Neutral
    score: 0.4426
  - label: Positive
    score: 0.0043
library_name: transformers
license: mit
language:
- id
---
# indobertweet-base-Indonesian-sentiment-analysis

## Model Details

### Model Description
This model is a fine-tuned version of [IndoBertweet-base-uncased](https://huggingface.co/indolem/indobertweet-base-uncased) for Indonesian sentiment analysis. It classifies text into three categories: Negative, Neutral, and Positive. It was trained on a diverse dataset of reactions from Twitter and other social media platforms, covering topics such as politics, disasters, and education. Hyperparameters were tuned with Optuna, and the model was evaluated with accuracy, F1-score, precision, and recall.
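Under the hood, the classifier emits one logit per label, and the per-label scores shown in the widget examples above are a softmax over those logits. A minimal dependency-free sketch of that conversion, using the `id2label` mapping from this repository's config.json (the logit values are made up for illustration):

```python
import math

# Label mapping taken from config.json (id2label)
ID2LABEL = {0: "Negative", 1: "Neutral", 2: "Positive"}

def logits_to_scores(logits):
    """Convert raw classifier logits into per-label softmax probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return {ID2LABEL[i]: e / total for i, e in enumerate(exps)}

# Hypothetical logits; the highest-scoring label is the prediction
scores = logits_to_scores([2.1, -0.5, 0.3])
```

In practice the logits come from running the fine-tuned model itself; this sketch only shows how they become the label/score pairs displayed by the widget.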

## Bias and Limitations
This model was trained on a specific dataset and may inherit socio-cultural biases from that data; it may also be less accurate on recent events the data does not cover. In addition, the three-category scheme cannot capture the full complexity of emotion, particularly in nuanced contexts. Account for these limitations when using the model.

## Evaluation Results
Hyperparameters were optimized with Optuna. The model was trained for at most 10 epochs with a batch size of 16, using the tuned learning rate and weight decay. Evaluation runs every 100 steps, keeping the checkpoint with the best accuracy, and early stopping with a patience of 3 was applied to prevent overfitting.

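In the table below, the Accuracy and Recall columns coincide at every evaluation step; that is exactly what happens when recall is averaged with support weights, since support-weighted recall reduces to accuracy. A small self-contained sketch of support-weighted precision/recall/F1 on hypothetical labels (the weighting scheme is an assumption about how the metrics were computed; scikit-learn's `average='weighted'` behaves the same way):

```python
def weighted_prf(y_true, y_pred, labels=(0, 1, 2)):
    """Support-weighted precision, recall, and F1 for a multi-class task."""
    n = len(y_true)
    prec = rec = f1 = 0.0
    for lab in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        support = tp + fn  # number of true instances of this label
        p_ = tp / (tp + fp) if tp + fp else 0.0
        r_ = tp / (tp + fn) if tp + fn else 0.0
        f_ = 2 * p_ * r_ / (p_ + r_) if p_ + r_ else 0.0
        prec += support * p_ / n
        rec += support * r_ / n
        f1 += support * f_ / n
    return prec, rec, f1

# Hypothetical labels: 3 of 4 predictions correct, so accuracy is 0.75
precision, recall, f1 = weighted_prf([0, 0, 1, 2], [0, 1, 1, 2])
```

Here the weighted recall comes out to 0.75, matching plain accuracy, which is why the two columns track each other in the table.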
<table style="text-align: center; width: 100%;">
  <tr><th>Step</th><th>Training Loss</th><th>Validation Loss</th><th>Accuracy</th><th>F1</th><th>Precision</th><th>Recall</th></tr>
  <tr><td>100</td><td>1.052800</td><td>0.995017</td><td>0.482368</td><td>0.348356</td><td>0.580544</td><td>0.482368</td></tr>
  <tr><td>200</td><td>0.893700</td><td>0.807756</td><td>0.730479</td><td>0.703134</td><td>0.756189</td><td>0.730479</td></tr>
  <tr><td>300</td><td>0.583400</td><td>0.476157</td><td>0.850126</td><td>0.847161</td><td>0.849467</td><td>0.850126</td></tr>
  <tr><td>400</td><td>0.413600</td><td>0.385942</td><td>0.867758</td><td>0.867614</td><td>0.870417</td><td>0.867758</td></tr>
  <tr><td>500</td><td>0.345700</td><td>0.362191</td><td>0.885390</td><td>0.883918</td><td>0.886880</td><td>0.885390</td></tr>
  <tr><td>600</td><td>0.245400</td><td>0.330090</td><td>0.897985</td><td>0.897466</td><td>0.897541</td><td>0.897985</td></tr>
  <tr><td>700</td><td>0.485000</td><td>0.308807</td><td>0.899244</td><td>0.898736</td><td>0.898761</td><td>0.899244</td></tr>
  <tr><td>800</td><td>0.363700</td><td>0.328786</td><td>0.896725</td><td>0.895167</td><td>0.898695</td><td>0.896725</td></tr>
  <tr><td>900</td><td>0.369800</td><td>0.329429</td><td>0.892947</td><td>0.893138</td><td>0.898281</td><td>0.892947</td></tr>
  <tr><td>1000</td><td>0.273300</td><td>0.305412</td><td>0.910579</td><td>0.910355</td><td>0.910519</td><td>0.910579</td></tr>
  <tr><td>1100</td><td>0.272800</td><td>0.388976</td><td>0.891688</td><td>0.893113</td><td>0.896606</td><td>0.891688</td></tr>
  <tr><td>1200</td><td>0.259900</td><td>0.305771</td><td>0.913098</td><td>0.913123</td><td>0.913669</td><td>0.913098</td></tr>
  <tr><td>1300</td><td>0.293500</td><td>0.317654</td><td>0.908060</td><td>0.908654</td><td>0.909939</td><td>0.908060</td></tr>
  <tr><td>1400</td><td>0.255200</td><td>0.331161</td><td>0.915617</td><td>0.915708</td><td>0.916149</td><td>0.915617</td></tr>
  <tr><td>1500</td><td>0.139800</td><td>0.352545</td><td>0.909320</td><td>0.909768</td><td>0.911014</td><td>0.909320</td></tr>
  <tr><td>1600</td><td>0.194400</td><td>0.372482</td><td>0.904282</td><td>0.904296</td><td>0.906285</td><td>0.904282</td></tr>
  <tr><td>1700</td><td>0.134200</td><td>0.340576</td><td>0.906801</td><td>0.907110</td><td>0.907780</td><td>0.906801</td></tr>
</table>

## Citation
```
@misc{Ardiyanto_Mikhael_2024,
  author    = {Mikhael Ardiyanto},
  title     = {Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis},
  year      = {2024},
  url       = {https://huggingface.co/Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis},
  publisher = {Hugging Face}
}
```
config.json ADDED
@@ -0,0 +1,40 @@
{
  "_name_or_path": "/content/drive/My Drive/Sentiment-3/model-NusaBERT-sentiment",
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.2,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_ids": 0,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.2,
  "hidden_size": 768,
  "id2label": {
    "0": "Negative",
    "1": "Neutral",
    "2": "Positive"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "Negative": 0,
    "Neutral": 1,
    "Positive": 2
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "output_past": true,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "problem_type": "single_label_classification",
  "torch_dtype": "float32",
  "transformers_version": "4.42.4",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 31923
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:37d7220817bf9603a095a9b465abe6c59fd0aa40afb9f6ecc79fb584a3a51659
size 442265596
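The model.safetensors entry above is a Git LFS pointer file rather than the weights themselves: each line is a space-separated key/value pair. A minimal sketch of reading those fields (illustrative only; real tooling should go through git-lfs):

```python
def parse_lfs_pointer(text):
    """Split a git-lfs pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer stored in this commit for model.safetensors
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:37d7220817bf9603a095a9b465abe6c59fd0aa40afb9f6ecc79fb584a3a51659\n"
    "size 442265596\n"
)
info = parse_lfs_pointer(pointer)  # keys: version, oid, size
```

The `oid` is the SHA-256 of the real file and `size` is its byte count, which is how the Hub resolves the ~442 MB weights file behind this small pointer.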
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 1000000000000000019884624838656,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render.