# **Toxic Comment Classification with Transformer Optimization**

This project demonstrates a high-performance pipeline for classifying toxic comments using a **binary classification** approach. The models were trained and evaluated on the **Jigsaw Toxic Comment Classification** dataset, with the domain-specific **Toxic-BERT** model as the primary architecture.

---

## **Project Overview**
* **Objective**: To build an efficient **binary toxicity classifier** using state-of-the-art NLP models.
* **Model Type**: Binary classification (Toxic vs. Non-Toxic).
* **Dataset**: Jigsaw Toxic Comment Classification Challenge.
* **Scope**: Includes data visualization, model benchmarking, and size reduction for deployment.

---

## **Technical Workflow**

### **1. Data Preprocessing & EDA**
* **Labeling**: The six multi-label categories (`toxic`, `severe_toxic`, `obscene`, `threat`, `insult`, `identity_hate`) were condensed into a single binary **`is_toxic`** label (see the sketch after this list).
* **Balancing**: The dataset was sampled to include 16,000 toxic and 16,000 non-toxic comments to ensure a balanced 32,000-sample training set.
* **Cleaning**: Newline characters were removed to standardize the text input for transformer tokenizers.
* **Visualization**: Word clouds were generated for both classes to identify the most frequent terms associated with toxic and non-toxic speech.
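
A minimal sketch of these preprocessing steps is shown below, assuming the standard Kaggle `train.csv` column names; the file path and random seed are illustrative, not values taken from the project.

```python
import pandas as pd

# Load the Jigsaw training data (standard Kaggle column names assumed).
df = pd.read_csv("train.csv")

label_cols = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Condense the six labels into one binary target: a comment is toxic
# if any of the original categories applies.
df["is_toxic"] = (df[label_cols].sum(axis=1) > 0).astype(int)

# Strip newline characters so tokenizers see one continuous string.
df["comment_text"] = df["comment_text"].str.replace("\n", " ", regex=False)

# Balance the classes: 16,000 toxic + 16,000 non-toxic = 32,000 samples.
toxic = df[df["is_toxic"] == 1].sample(n=16_000, random_state=42)
non_toxic = df[df["is_toxic"] == 0].sample(n=16_000, random_state=42)
balanced = (
    pd.concat([toxic, non_toxic])
    .sample(frac=1, random_state=42)  # shuffle
    .reset_index(drop=True)
)
```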

### **2. Embedding Benchmarking**
The project evaluated 15 different embedding sets across two categories (an extraction sketch follows this list):
* **Light Models**: Includes DistilBERT, MiniLM, ALBERT, and ELECTRA-Small.
* **Heavy Models**: Includes BERT, RoBERTa, DeBERTa, XLNet, and domain-specific models like **Toxic-BERT** and HateBERT.
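
The sketch below shows one way such sentence-level embeddings can be extracted, via mean pooling over the last hidden state. `unitary/toxic-bert` is the public Hugging Face Hub ID for Toxic-BERT; the batch size and maximum sequence length are illustrative choices.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def embed(texts, model_name="unitary/toxic-bert", batch_size=32, device="cpu"):
    """Mean-pooled last-hidden-state embeddings for a list of strings."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).to(device).eval()
    chunks = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            batch = tokenizer(
                texts[i : i + batch_size],
                padding=True, truncation=True, max_length=128,
                return_tensors="pt",
            ).to(device)
            hidden = model(**batch).last_hidden_state     # (B, T, H)
            mask = batch["attention_mask"].unsqueeze(-1)  # (B, T, 1)
            # Average over real tokens only, ignoring padding positions.
            chunks.append(((hidden * mask).sum(1) / mask.sum(1)).cpu())
    return torch.cat(chunks).numpy()
```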

### **3. Model Performance Results**
Each embedding set was evaluated by training Logistic Regression (LR), Support Vector Machine (SVM, linear and RBF kernels), and Random Forest (RF) classifiers on top of it; selected results are shown below (a benchmarking sketch follows the table).

| Embedding | LR (AUC) | Linear SVM (Accuracy) | RBF SVM (AUC) | RF (Accuracy) |
| :--- | :--- | :--- | :--- | :--- |
| **Toxic-BERT_transformer_emb** | 0.997022 | 0.979531 | 0.991532 | 0.979375 |
| **HateBERT_transformer_emb** | 0.967701 | 0.901875 | 0.965530 | 0.852344 |
| **DistilBERT_transformer_emb** | 0.967614 | 0.898906 | 0.967362 | 0.878125 |
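
A sketch of the benchmarking loop behind the table, assuming the embeddings and labels were saved as NumPy arrays (the file names are hypothetical); the split ratio and classifier hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

# Hypothetical inputs: pooled embeddings from the previous step plus binary labels.
X = np.load("toxic_bert_embeddings.npy")
y = np.load("is_toxic_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# LR (AUC): logistic regression scored by ROC AUC on predicted probabilities.
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("LR AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))

# Linear SVM (Accuracy): linear-kernel SVM scored by accuracy.
lin = LinearSVC().fit(X_tr, y_tr)
print("LinearSVM ACC:", accuracy_score(y_te, lin.predict(X_te)))

# RBF SVM (AUC): probability=True enables predict_proba for AUC scoring.
rbf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
print("RBF AUC:", roc_auc_score(y_te, rbf.predict_proba(X_te)[:, 1]))

# RF (Accuracy): random forest scored by accuracy.
rf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
print("RF ACC:", accuracy_score(y_te, rf.predict(X_te)))
```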

---

## **Optimization Techniques**

### **4. Dynamic Quantization**
To optimize the teacher model (Toxic-BERT) for CPU inference, dynamic quantization was applied to convert weights from FP32 to INT8 (a sketch follows the bullets below).
* **Size Reduction**: The model size decreased from **438.01 MB** to **181.49 MB**.
* **Accuracy Retention**: The quantized model maintained a high **Test AUC of 0.9966**, showing negligible performance loss despite the 58% reduction in size.
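
In PyTorch, dynamic quantization of the `nn.Linear` modules is a single call to `torch.quantization.quantize_dynamic`. A sketch, with the public `unitary/toxic-bert` ID standing in for the project's fine-tuned binary checkpoint and an illustrative size-measurement helper:

```python
import os
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("unitary/toxic-bert")

# Convert nn.Linear weights to INT8; activations stay FP32 and are
# quantized on the fly at inference time (hence "dynamic").
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m, path="_tmp_weights.pt"):
    """Rough on-disk footprint of a model's state dict."""
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

print(f"FP32: {size_mb(model):.2f} MB -> INT8: {size_mb(quantized):.2f} MB")
```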

### **5. Knowledge Distillation**
A smaller student model (**DistilBERT**) was trained to mimic the behavior of the **Toxic-BERT** teacher.
* **Loss Function**: A custom **Binary Knowledge Distillation** loss was used, combining Kullback-Leibler (KL) divergence on the teacher's softened probabilities with cross-entropy on the hard labels (see the sketch after this list).
* **Student Performance**: Reached a **Validation AUC of 0.9866** after 5 training epochs.
* **Final Footprint**: The student model is **267.86 MB**, significantly more portable than the original **438.03 MB** teacher model.
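
A sketch of such a loss function is below; the temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not values reported above.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Binary KD loss: alpha * soft (KL) term + (1 - alpha) * hard (CE) term.

    Logits have shape (batch, 2); labels are class indices in {0, 1}.
    """
    # Soft targets: KL divergence between temperature-scaled distributions.
    # Scaling by T**2 keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In the training loop, the teacher's logits would be computed under `torch.no_grad()` so that only the student's parameters are updated.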

---

## **Requirements**
* `torch`
* `transformers`
* `sentence-transformers`
* `pandas`, `numpy`
* `matplotlib`, `wordcloud`
* `scikit-learn`
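
The stack can be installed from PyPI in one step:

```bash
pip install torch transformers sentence-transformers pandas numpy matplotlib wordcloud scikit-learn
```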