---
language:
- en
- hi
tags:
- hate-speech
- text-classification
- bilstm
- glove
- multilingual
- transfer-learning
- hinglish
- sequential-learning
datasets:
- tuklu/nprism
license: mit
model-index:
- name: hate-speech-multilingual-bilstm
  results:
  - task:
      type: text-classification
      name: Hate Speech Detection
    dataset:
      name: nprism
      type: tuklu/nprism
    metrics:
    - type: f1
      value: 0.6419
      name: F1 Score (Best Strategy - Full Phase)
    - type: accuracy
      value: 0.6854
      name: Accuracy (Best Strategy - Full Phase)
    - type: roc_auc
      value: 0.7528
      name: ROC-AUC (Best Strategy - Full Phase)
---

# Multilingual Hate Speech Detection: GloVe + BiLSTM

**Task:** Binary text classification (Hate / Non-Hate)
**Languages:** English, Hindi, Hinglish (Hindi-English code-mixed)
**Architecture:** Bidirectional LSTM with frozen GloVe embeddings
**Best Strategy:** Hindi → English → Hinglish → Full (F1: 0.6419, AUC: 0.7528)

---

## Table of Contents
1. [What This Project Does](#1-what-this-project-does)
2. [The Dataset](#2-the-dataset)
3. [Model Architecture](#3-model-architecture)
4. [The Core Idea: Transfer Learning](#4-the-core-idea-transfer-learning)
5. [The Experiment: Plan B](#5-the-experiment-plan-b)
6. [Results & Best Model Selection](#6-results--best-model-selection)
7. [Full Results by Strategy](#7-full-results-by-strategy)
8. [All Model Checkpoints](#8-all-model-checkpoints)
9. [How to Use](#9-how-to-use)

---

## 1. What This Project Does

This project investigates whether the **order of language exposure** during sequential transfer learning affects a model's ability to detect hate speech across three languages: English, Hindi, and Hinglish.

The key question:

> If you train a model on English first, then Hindi, then Hinglish, does it perform better or worse than training on Hinglish first?

We ran all **6 possible orderings**, each followed by a final training pass on the complete shuffled dataset, and measured performance after every single phase.

---

## 2. The Dataset

Dataset: [tuklu/nprism](https://huggingface.co/datasets/tuklu/nprism)

| Split | Samples |
|---|---|
| Train | 17,704 |
| Validation | 2,950 |
| Test | 8,852 |
| **Total** | **29,506** |

| Language | Count | % |
|---|---|---|
| English | 14,994 | 50.8% |
| Hindi | 9,738 | 33.0% |
| Hinglish | 4,774 | 16.2% |

| Label | Count | % |
|---|---|---|
| Non-Hate (0) | 15,799 | 53.5% |
| Hate (1) | 13,707 | 46.5% |

![Language Distribution](output/figures/language_distribution.png)

The pie chart above shows that the dataset is dominated by English (50.8%), with Hindi and Hinglish making up the rest. This imbalance matters: the model sees more English examples, and the GloVe embeddings are English-centric, which directly explains why the English phase always achieves the highest accuracy.

---

## 3. Model Architecture

```
Input: Text sequence (max 100 tokens)
       ↓
GloVe Embedding Layer (vocab: 50,000 × 300d) - FROZEN
       ↓
Bidirectional LSTM (128 units)
   → reads the sentence left-to-right AND right-to-left
   → captures context from both directions
       ↓
Dropout (0.5) - randomly disables 50% of neurons during training
   → prevents memorising training data (overfitting)
       ↓
Dense Layer (64 neurons, ReLU activation)
       ↓
Output Layer (1 neuron, Sigmoid)
   → outputs probability 0.0 to 1.0
   → > 0.5 = Hate Speech
   → ≤ 0.5 = Not Hate Speech
```

**Why GloVe?**
GloVe (Global Vectors) is a pre-trained word embedding trained on 6 billion tokens. Each word becomes a 300-number vector that captures semantic meaning: "hate" and "violence" end up close together in this 300-dimensional space. We freeze it (don't update it during training) to preserve this general knowledge and only train the layers on top.

**Why BiLSTM?**
A regular LSTM reads text left to right. A BiLSTM reads it both ways and combines the results. The sentence *"I don't hate you"* needs both directions to understand the negation: the word "don't" only makes sense in the context of what comes after it.

**Training config:**
- Optimizer: Adam
- Loss: Binary Cross-Entropy
- Epochs per phase: 8
- Batch size: 32 (64 for full phase)
- Max sequence length: 100 tokens
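
As a reference, here is a minimal Keras sketch of the architecture and training configuration above. It assumes a GloVe weight matrix `embedding_matrix` of shape (50,000, 300) has already been built from the tokenizer vocabulary; that preprocessing step is not shown.

```python
import tensorflow as tf

def build_model(embedding_matrix, vocab_size=50_000, embed_dim=300):
    """BiLSTM classifier on top of frozen GloVe embeddings (sketch)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(
            input_dim=vocab_size, output_dim=embed_dim,
            embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
            trainable=False),                                  # frozen GloVe vectors
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
        tf.keras.layers.Dropout(0.5),                          # regularisation
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),        # P(hate)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```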

---

## 4. The Core Idea: Transfer Learning

**Transfer learning** = the model keeps what it learned from one task when starting the next one.

Think of it like a student who already knows French: learning Spanish is faster because both share Latin roots. The vocabulary, grammar intuitions, and reading skills transfer.

In our case: train on English → the model learns what "hate speech patterns" look like in a language GloVe understands well → then fine-tune on Hindi → the model adapts those patterns to Hindi → then Hinglish → the model adapts again using everything it knows.

### The Bug That Was Fixed

The original code was reinitialising the model inside the loop, meaning **every language got a brand new, untrained model**. That is not transfer learning at all.

```python
# WRONG: model reset every iteration, no knowledge transfer
for lang in languages:
    model = Sequential()   # ← destroys all previous learning
    model.fit(X_lang, ...)

# CORRECT: model built once, weights carry forward
model = build_model()      # ← built once before the loop
for lang in languages:
    model.fit(X_lang, ...) # ← each fit continues from where the previous one left off
```

This single fix is the entire point of the experiment.

---

## 5. The Experiment: Plan B

We tested all 6 permutations of [English, Hindi, Hinglish], each ending with a full shuffled dataset phase:

| # | Training Order |
|---|---|
| 1 | English → Hindi → Hinglish → Full |
| 2 | English → Hinglish → Hindi → Full |
| 3 | Hindi → English → Hinglish → Full |
| 4 | Hindi → Hinglish → English → Full |
| 5 | Hinglish → English → Hindi → Full |
| 6 | Hinglish → Hindi → English → Full |

**After each phase**, the model is immediately evaluated on **that specific language's test subset**. So for the strategy `English → Hindi → Hinglish → Full`:

```
Train on English   →  evaluate English test set   →  save metrics + plots
Train on Hindi     →  evaluate Hindi test set     →  save metrics + plots
Train on Hinglish  →  evaluate Hinglish test set  →  save metrics + plots
Train on Full data →  evaluate full test set      →  save metrics + plots
```

This gives us 4 snapshots per strategy, letting us see exactly how the model evolves as it learns each new language.
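
A minimal sketch of one strategy run, under the assumption that `build_model()` from the architecture section and hypothetical per-language splits (`X_train[...]`, `y_train[...]`, `X_val[...]`, `X_test[...]`) are already prepared:

```python
from sklearn.metrics import f1_score, roc_auc_score

strategy = ["hindi", "english", "hinglish", "full"]   # one of the six orderings

model = build_model(embedding_matrix)   # built ONCE; weights carry across phases
for phase in strategy:
    model.fit(X_train[phase], y_train[phase],
              validation_data=(X_val[phase], y_val[phase]),
              epochs=8,
              batch_size=64 if phase == "full" else 32)

    # Evaluate on that phase's test subset immediately after training on it
    probs = model.predict(X_test[phase]).flatten()
    preds = (probs > 0.5).astype(int)
    print(phase,
          "F1:",  round(f1_score(y_test[phase], preds), 4),
          "AUC:", round(roc_auc_score(y_test[phase], probs), 4))
```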

---

## 6. Results & Best Model Selection

### Full Phase Results (Final Model Performance)

| Strategy | Accuracy | Balanced Acc | Precision | Recall | Specificity | F1 | ROC-AUC |
|---|---|---|---|---|---|---|---|
| **Hindi → English → Hinglish → Full** | 0.6854 | **0.6802** | 0.6810 | 0.6070 | 0.7534 | **0.6419** | 0.7528 |
| Hindi → Hinglish → English → Full | **0.6865** | 0.6801 | 0.6900 | 0.5905 | 0.7698 | 0.6364 | 0.7507 |
| Hinglish → Hindi → English → Full | 0.6845 | 0.6775 | 0.6918 | 0.5786 | 0.7764 | 0.6301 | **0.7548** |
| English → Hinglish → Hindi → Full | 0.6813 | 0.6740 | 0.6899 | 0.5703 | 0.7776 | 0.6244 | 0.7535 |
| Hinglish → English → Hindi → Full | 0.6778 | 0.6718 | 0.6768 | 0.5866 | 0.7570 | 0.6285 | 0.7521 |
| English → Hindi → Hinglish → Full | 0.6796 | 0.6678 | 0.7243 | 0.5010 | 0.8346 | 0.5923 | 0.7599 |

### Why Hindi → English → Hinglish → Full is the Best Model

**F1 Score is the most important metric here.** For hate speech detection, we need to balance two things:
- **Precision**: don't falsely flag innocent content as hate
- **Recall**: don't miss actual hate speech

F1 is the harmonic mean of both. A model that misses half the hate speech (low recall) or flags everything as hate (low precision) is useless in practice.

Look at `English → Hindi → Hinglish → Full`: it has the highest ROC-AUC (0.7599) but an F1 of only 0.5923. Why? Its recall is only 0.5010, so it misses **half of all hate speech**. A high ROC-AUC can be misleading when threshold calibration is off.
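
As a quick sanity check, the F1 values in the table follow directly from the precision and recall columns via the harmonic mean, F1 = 2PR / (P + R):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f1(0.6810, 0.6070))  # Hindi → English → Hinglish → Full  -> ~0.6419
print(f1(0.7243, 0.5010))  # English → Hindi → Hinglish → Full  -> ~0.5923
```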

`Hindi → English → Hinglish → Full` has:
- Best F1 (0.6419): best balance of precision and recall
- Best Balanced Accuracy (0.6802): most fair across both classes
- Recall of 0.607: catches significantly more hate speech than alternatives

**Why does Hindi-first work better?**

Hindi is the hardest language for this model (GloVe has limited Hindi coverage). Training on Hindi *first* forces the model to develop general hate-speech-detection features that aren't dependent on GloVe's English-centric embeddings. It learns to detect patterns from context and sequence rather than relying on word meanings alone. When English comes next, the model improves dramatically and carries those robust features forward. English-first strategies give the model an easy start, but it never develops the robustness needed for low-resource languages.

### Best Model Training Curves (Hindi → English → Hinglish → Full)

**Phase 1: Train on Hindi**

![Hindi Training Curves](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hindi]_curves.png)

The model starts cold on Hindi. Accuracy is low (~55-57%) and validation loss is unstable; this is expected. GloVe doesn't cover Hindi well, so the model is learning purely from sequential patterns. The struggle here is valuable: it forces the model to build language-agnostic features.

**Phase 2: Train on English**

![English Training Curves](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[english]_curves.png)

Dramatic improvement. The model jumps to ~77-78% accuracy. GloVe embeddings now align well with the input language. Notice that it doesn't start from scratch: the Hindi training gave it a base of sequential hate-speech patterns, and now, with English vocabulary, the model improves rapidly.

**Phase 3: Train on Hinglish**

![Hinglish Training Curves](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hinglish]_curves.png)

Hinglish is code-mixed; it borrows from both languages the model already knows. Training accuracy climbs to ~68-69%. The model adapts its existing knowledge to handle the mixed vocabulary.

**Phase 4: Train on Full Dataset**

![Full Training Curves](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_curves.png)

Final fine-tuning on all 17,704 shuffled training samples. Training and validation accuracy converge and the loss stabilises. This phase consolidates all language knowledge into the final model.

### Best Model Evaluation Charts

**Confusion Matrix:**

![Confusion Matrix](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_cm.png)

Shows actual vs predicted counts. A well-balanced confusion matrix means the model is not biased toward one class. True Positives (hate correctly identified) and True Negatives (non-hate correctly identified) should both be high.

**ROC Curve (AUC = 0.7528):**

![ROC Curve](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_roc.png)

The ROC curve shows the trade-off between the True Positive Rate (catching hate speech) and the False Positive Rate (wrongly flagging non-hate). An AUC of 0.7528 means the model has a 75.3% chance of correctly ranking a hate speech example higher than a non-hate example, significantly better than random (0.5).
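
To make that ranking interpretation concrete, here is a small self-contained check on synthetic scores (not this model's outputs) showing that ROC-AUC equals the fraction of (hate, non-hate) pairs the classifier ranks correctly:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                   # synthetic labels
scores = y_true * 0.3 + rng.normal(0, 0.5, size=1000)    # noisy scores correlated with labels

auc = roc_auc_score(y_true, scores)

# Fraction of (positive, negative) pairs where the positive example is scored higher
pos, neg = scores[y_true == 1], scores[y_true == 0]
pairwise = (pos[:, None] > neg[None, :]).mean()

print(round(auc, 4), round(pairwise, 4))   # the two values match (ties aside)
```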

**Precision-Recall Curve:**

![PR Curve](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_pr.png)

Shows the trade-off between precision and recall at different thresholds. The curve staying high across recall values means the model maintains good precision even as it catches more hate speech. Useful for choosing the operating threshold based on deployment requirements.

**F1 vs Threshold Curve:**

![F1 Curve](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_f1.png)

Shows the F1 score at every possible decision threshold. The peak is near 0.5, confirming that our threshold choice is well calibrated. If deploying in a high-recall scenario (catch all hate speech, even at the cost of false positives), lower the threshold; for high precision (only flag near-certain hate speech), raise it.
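
A short sketch of how such a threshold sweep could be done; `val_probs` and `y_val` are placeholder names for this model's validation-set probabilities and labels:

```python
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score

thresholds = np.linspace(0.05, 0.95, 91)
f1s = [f1_score(y_val, (val_probs > t).astype(int)) for t in thresholds]

best_t = thresholds[int(np.argmax(f1s))]
print(f"Best F1 {max(f1s):.4f} at threshold {best_t:.2f}")

# Pick a lower threshold than best_t for a high-recall deployment,
# or a higher one when precision matters more.
preds = (val_probs > best_t).astype(int)
print("precision:", precision_score(y_val, preds), "recall:", recall_score(y_val, preds))
```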

---

## 7. Full Results by Strategy

### Strategy 1: English → Hindi → Hinglish → Full

| Phase | Accuracy | F1 | ROC-AUC |
|---|---|---|---|
| English | 0.7701 | 0.7696 | 0.8504 |
| Hindi | 0.5507 | 0.0000 | 0.5689 |
| Hinglish | 0.6780 | 0.5155 | 0.6691 |
| Full | 0.6796 | 0.5923 | 0.7599 |

**Note on the Hindi phase row:** Precision = 0, Recall = 0, F1 = 0, Specificity = 1.0. This is not a data error. After training only on English, the model predicted **zero hate speech** for every Hindi test sample; it classified everything as non-hate. This means:
- Specificity = 1.0 ✓ (no false positives, because it never predicts hate at all)
- Recall = 0.0 (catches zero actual hate speech)
- F1 = 0.0 (completely useless for Hindi at this stage)

This is the strongest evidence that English-first is the wrong order: the model becomes so tuned to English patterns that it cannot generalise to Hindi at all.

| Phase | Training Curves | Confusion Matrix | ROC | PR | F1 Curve |
|---|---|---|---|---|---|
| English | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[english]_curves.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[english]_cm.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[english]_roc.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[english]_pr.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[english]_f1.png) |
| Hindi | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hindi]_curves.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hindi]_cm.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hindi]_roc.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hindi]_pr.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hindi]_f1.png) |
| Hinglish | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hinglish]_curves.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hinglish]_cm.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hinglish]_roc.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hinglish]_pr.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[hinglish]_f1.png) |
| Full | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[Full]_curves.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[Full]_cm.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[Full]_roc.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[Full]_pr.png) | ![](output/figures/english_to_hindi_to_hinglish/english_to_hindi_to_hinglish_[Full]_f1.png) |

---

### Strategy 2: English → Hinglish → Hindi → Full

| Phase | Accuracy | F1 | ROC-AUC |
|---|---|---|---|
| English | 0.7721 | 0.7743 | 0.8525 |
| Hinglish | 0.6631 | 0.5460 | 0.6899 |
| Hindi | 0.5810 | 0.4444 | 0.5975 |
| Full | 0.6813 | 0.6244 | 0.7535 |

| Phase | Training Curves | Confusion Matrix | ROC | PR | F1 Curve |
|---|---|---|---|---|---|
| English | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[english]_curves.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[english]_cm.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[english]_roc.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[english]_pr.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[english]_f1.png) |
| Hinglish | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hinglish]_curves.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hinglish]_cm.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hinglish]_roc.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hinglish]_pr.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hinglish]_f1.png) |
| Hindi | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hindi]_curves.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hindi]_cm.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hindi]_roc.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hindi]_pr.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[hindi]_f1.png) |
| Full | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[Full]_curves.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[Full]_cm.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[Full]_roc.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[Full]_pr.png) | ![](output/figures/english_to_hinglish_to_hindi/english_to_hinglish_to_hindi_[Full]_f1.png) |

---

### Strategy 3: Hindi → English → Hinglish → Full ⭐ BEST MODEL

| Phase | Accuracy | F1 | ROC-AUC |
|---|---|---|---|
| Hindi | 0.5662 | 0.2860 | 0.5748 |
| English | 0.7780 | 0.7830 | 0.8549 |
| Hinglish | 0.6880 | 0.5641 | 0.7172 |
| **Full** | **0.6854** | **0.6419** | **0.7528** |

Starting with the hardest language (Hindi) builds robustness. Despite the rough start, the model recovers strongly and achieves the best final F1.

| Phase | Training Curves | Confusion Matrix | ROC | PR | F1 Curve |
|---|---|---|---|---|---|
| Hindi | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hindi]_curves.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hindi]_cm.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hindi]_roc.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hindi]_pr.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hindi]_f1.png) |
| English | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[english]_curves.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[english]_cm.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[english]_roc.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[english]_pr.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[english]_f1.png) |
| Hinglish | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hinglish]_curves.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hinglish]_cm.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hinglish]_roc.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hinglish]_pr.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[hinglish]_f1.png) |
| Full | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_curves.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_cm.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_roc.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_pr.png) | ![](output/figures/hindi_to_english_to_hinglish/hindi_to_english_to_hinglish_[Full]_f1.png) |

---

### Strategy 4: Hindi → Hinglish → English → Full

| Phase | Accuracy | F1 | ROC-AUC |
|---|---|---|---|
| Hindi | 0.5779 | 0.3898 | 0.5972 |
| Hinglish | 0.6986 | 0.5289 | 0.7109 |
| English | 0.7780 | 0.7816 | 0.8563 |
| Full | 0.6865 | 0.6364 | 0.7507 |

| Phase | Training Curves | Confusion Matrix | ROC | PR | F1 Curve |
|---|---|---|---|---|---|
| Hindi | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hindi]_curves.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hindi]_cm.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hindi]_roc.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hindi]_pr.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hindi]_f1.png) |
| Hinglish | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hinglish]_curves.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hinglish]_cm.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hinglish]_roc.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hinglish]_pr.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[hinglish]_f1.png) |
| English | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[english]_curves.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[english]_cm.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[english]_roc.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[english]_pr.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[english]_f1.png) |
| Full | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[Full]_curves.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[Full]_cm.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[Full]_roc.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[Full]_pr.png) | ![](output/figures/hindi_to_hinglish_to_english/hindi_to_hinglish_to_english_[Full]_f1.png) |

---

### Strategy 5: Hinglish → English → Hindi → Full

| Phase | Accuracy | F1 | ROC-AUC |
|---|---|---|---|
| Hinglish | 0.6652 | 0.5119 | 0.6692 |
| English | 0.7716 | 0.7829 | 0.8484 |
| Hindi | 0.5638 | 0.2466 | 0.5982 |
| Full | 0.6778 | 0.6285 | 0.7521 |

| Phase | Training Curves | Confusion Matrix | ROC | PR | F1 Curve |
|---|---|---|---|---|---|
| Hinglish | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hinglish]_curves.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hinglish]_cm.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hinglish]_roc.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hinglish]_pr.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hinglish]_f1.png) |
| English | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[english]_curves.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[english]_cm.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[english]_roc.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[english]_pr.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[english]_f1.png) |
| Hindi | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hindi]_curves.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hindi]_cm.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hindi]_roc.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hindi]_pr.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[hindi]_f1.png) |
| Full | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[Full]_curves.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[Full]_cm.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[Full]_roc.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[Full]_pr.png) | ![](output/figures/hinglish_to_english_to_hindi/hinglish_to_english_to_hindi_[Full]_f1.png) |

---

### Strategy 6: Hinglish → Hindi → English → Full

| Phase | Accuracy | F1 | ROC-AUC |
|---|---|---|---|
| Hinglish | 0.6837 | 0.5369 | 0.6929 |
| Hindi | 0.5924 | 0.4656 | 0.5964 |
| English | 0.7765 | 0.7811 | 0.8534 |
| Full | 0.6845 | 0.6301 | 0.7548 |

| Phase | Training Curves | Confusion Matrix | ROC | PR | F1 Curve |
|---|---|---|---|---|---|
| Hinglish | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hinglish]_curves.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hinglish]_cm.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hinglish]_roc.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hinglish]_pr.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hinglish]_f1.png) |
| Hindi | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hindi]_curves.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hindi]_cm.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hindi]_roc.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hindi]_pr.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[hindi]_f1.png) |
| English | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[english]_curves.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[english]_cm.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[english]_roc.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[english]_pr.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[english]_f1.png) |
| Full | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[Full]_curves.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[Full]_cm.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[Full]_roc.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[Full]_pr.png) | ![](output/figures/hinglish_to_hindi_to_english/hinglish_to_hindi_to_english_[Full]_f1.png) |

---

## 8. All Model Checkpoints

All 6 trained models are available in the `models/` folder of this repo (the best model is also provided as `model.h5` at the repo root). Each filename encodes the training order.

| File | Strategy | Final F1 | Final AUC |
|---|---|---|---|
| `model.h5` | Hindi → English → Hinglish → Full ⭐ | 0.6419 | 0.7528 |
| `models/planB_hindi_to_english_to_hinglish_Full.h5` | Hindi → English → Hinglish → Full | 0.6419 | 0.7528 |
| `models/planB_hindi_to_hinglish_to_english_Full.h5` | Hindi → Hinglish → English → Full | 0.6364 | 0.7507 |
| `models/planB_hinglish_to_hindi_to_english_Full.h5` | Hinglish → Hindi → English → Full | 0.6301 | 0.7548 |
| `models/planB_english_to_hinglish_to_hindi_Full.h5` | English → Hinglish → Hindi → Full | 0.6244 | 0.7535 |
| `models/planB_hinglish_to_english_to_hindi_Full.h5` | Hinglish → English → Hindi → Full | 0.6285 | 0.7521 |
| `models/planB_english_to_hindi_to_hinglish_Full.h5` | English → Hindi → Hinglish → Full | 0.5923 | 0.7599 |

---

## 9. How to Use

```python
import json
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import tokenizer_from_json
from tensorflow.keras.preprocessing.sequence import pad_sequences
from huggingface_hub import hf_hub_download

# Load tokenizer
tokenizer_path = hf_hub_download(repo_id="tuklu/SASC", filename="tokenizer.json")
with open(tokenizer_path) as f:
    tokenizer = tokenizer_from_json(f.read())

# Load best model
model_path = hf_hub_download(repo_id="tuklu/SASC", filename="model.h5")
model = tf.keras.models.load_model(model_path)

# Predict
texts = ["I hate all of them", "Have a great day!"]
sequences = tokenizer.texts_to_sequences(texts)
padded = pad_sequences(sequences, maxlen=100)
probs = model.predict(padded).flatten()

for text, prob in zip(texts, probs):
    label = "Hate Speech" if prob > 0.5 else "Non-Hate"
    print(f"{label} ({prob:.3f}): {text}")
```

---

## Explainability: SHAP Analysis

We applied **SHAP (SHapley Additive exPlanations)** to all 6 trained models to understand which words drive hate speech predictions. A `GradientExplainer` runs on the BiLSTM sub-model (the embedding layer is bypassed; embeddings are pre-computed as floats), with 200 background training samples. Each model is evaluated on all 4 test sets (English, Hindi, Hinglish, Full).

> Full methodology, all plots, and detailed word tables: **[SHAP_REPORT.md](SHAP_REPORT.md)**
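
For orientation, a hedged sketch of how a `GradientExplainer` can be run on a sub-model with pre-computed embeddings, roughly matching the setup described above (the array names, layer indexing, and 500-sample evaluation size are illustrative assumptions, not taken from SHAP_REPORT.md):

```python
import numpy as np
import shap
import tensorflow as tf

# Assumes `model` is the loaded BiLSTM with the frozen Embedding as its first layer,
# and X_train_pad / X_test_pad are padded token-ID matrices of shape (n, 100).
embedding_layer = model.layers[0]

# Sub-model that starts after the embedding layer and takes float inputs
inner = tf.keras.Input(shape=(100, 300))
x = inner
for layer in model.layers[1:]:
    x = layer(x)
sub_model = tf.keras.Model(inner, x)

# Pre-compute embeddings so SHAP operates on continuous inputs
background = embedding_layer(X_train_pad[:200]).numpy()   # 200 background samples
samples    = embedding_layer(X_test_pad[:500]).numpy()    # evaluation subset (illustrative size)

explainer = shap.GradientExplainer(sub_model, background)
shap_values = explainer.shap_values(samples)              # per-token, per-dimension attributions

# Collapse the 300 embedding dimensions into one score per token position
token_scores = np.squeeze(np.array(shap_values)).sum(axis=-1)   # shape: (500, 100)
```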

### Best Model (Hindi → English → Hinglish): Top SHAP Words

| Eval | Top Hate Words | Top Non-Hate Words |
|---|---|---|
| English | credence, bj, ghazi, eni | plain, stranger, sarcasm, rubbish |
| Hindi | कॉल (call), भूमिपूजन (bhoomi pujan), मूर्ख (fool) | मैसेज (message), पुलिसकर्मी (police officer), जाएगी (will go) |
| Hinglish | bacchi, bull, srk, behan | madrassa, gdp, bech |
| Full | skua, brut, cleansing, baar | taraf, directory, quran |

![Best Model SHAP: English](shap/hindi_to_english_to_hinglish/shap_topwords_english.png)

![Best Model SHAP: Hinglish](shap/hindi_to_english_to_hinglish/shap_topwords_hinglish.png)

### Cross-Model Comparison (Full Test Set)

Words appearing in the top 10 of at least 3 models, showing which signals are consistent versus strategy-specific:

![Cross-Model SHAP: Full](shap/cross_model_comparison_full.png)

### Key Takeaways

- **Hindi SHAP values are 10× smaller** than English/Hinglish, confirming GloVe has near-zero Hindi coverage; the model relies on positional patterns, not semantics
- **"online" and "rajya"** are consistent non-hate signals across all 6 models (informational/political discussion context)
- **Accusatory verbs** (`blame`, `blaming`, `criticized`) and **violence language** (`massacres`, `cleansing`) are the most coherent English hate markers
- **Spurious correlations are visible** (`syntax`, `skua`, `ahh`), an expected limitation of non-contextual GloVe embeddings

---

## Citation

```
@misc{sasc2026,
  title={Multilingual Hate Speech Detection via Sequential Transfer Learning},
  author={tuklu},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/tuklu/SASC}
}
```