manjt committed
Commit a4eb4a4 · verified · 1 Parent(s): 37d2bab

Update README.md

Files changed (1)
  1. README.md +16 -13
README.md CHANGED
 
---
license: mit
datasets:
- thesofakillers/jigsaw-toxic-comment-classification-challenge
language:
- en
metrics:
- accuracy
- f1
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
library_name: transformers
tags:
- social
---
# **Toxic Comment Classification with Transformer Optimization**

This project demonstrates a high-performance pipeline for classifying toxic comments as a **binary classification** task. The models were trained and evaluated on the **Jigsaw Toxic Comment Classification** dataset, with the domain-specific **Toxic-BERT** model as the primary architecture.
 
---

## **Project Overview**
* **Objective**: To build an efficient **binary toxicity classifier** using state-of-the-art NLP models.
* **Model Type**: Binary classification (Toxic vs. Non-Toxic).
* **Dataset**: Jigsaw Toxic Comment Classification Challenge.
* **Scope**: Includes data visualization, model benchmarking, and size reduction for deployment.
 
---

## **Technical Workflow**

### **1. Data Preprocessing & EDA**
* **Labeling**: Multi-label categories (toxic, severe_toxic, obscene, threat, insult, identity_hate) were condensed into a single binary **'is_toxic'** label (see the sketch after this list).
* **Balancing**: The dataset was sampled to 16,000 toxic and 16,000 non-toxic comments, giving a balanced 32,000-sample training set.
* **Cleaning**: Newline characters were removed to standardize the text input for transformer tokenizers.
* **Visualization**: Word clouds were generated for both classes to identify the most frequent terms associated with toxic and non-toxic speech.
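
A minimal sketch of the labeling, cleaning, and balancing steps, assuming the standard Jigsaw `train.csv` schema (`comment_text` plus the six label columns); the file path and random seed are illustrative:

```python
import pandas as pd

# Assumes the standard Jigsaw train.csv schema; path is illustrative.
df = pd.read_csv("train.csv")

label_cols = ["toxic", "severe_toxic", "obscene", "threat",
              "insult", "identity_hate"]

# Condense the six multi-label columns into one binary 'is_toxic' label:
# a comment is toxic if any of the original categories applies.
df["is_toxic"] = (df[label_cols].sum(axis=1) > 0).astype(int)

# Remove newline characters to standardize tokenizer input.
df["comment_text"] = df["comment_text"].str.replace("\n", " ", regex=False)

# Balance the classes: 16,000 toxic + 16,000 non-toxic = 32,000 samples.
toxic = df[df["is_toxic"] == 1].sample(n=16_000, random_state=42)
non_toxic = df[df["is_toxic"] == 0].sample(n=16_000, random_state=42)
balanced = pd.concat([toxic, non_toxic]).sample(frac=1, random_state=42)
```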
 
### **2. Embedding Benchmarking**
The project evaluated 15 different embedding sets across two categories (an extraction sketch follows the list):
* **Light Models**: DistilBERT, MiniLM, ALBERT, and ELECTRA-Small.
* **Heavy Models**: BERT, RoBERTa, DeBERTa, XLNet, and domain-specific models such as **Toxic-BERT** and HateBERT.
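
The pooling strategy is not documented in this README, so the sketch below assumes mean pooling over non-padding tokens; the `unitary/toxic-bert` checkpoint, sequence length, and batching details are assumptions as well:

```python
import torch
from transformers import AutoModel, AutoTokenizer

def embed(texts, model_name="unitary/toxic-bert", batch_size=32, device="cpu"):
    """Mean-pooled sentence embeddings for a list of strings."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).to(device).eval()
    chunks = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            enc = tokenizer(texts[i:i + batch_size], padding=True,
                            truncation=True, max_length=128,
                            return_tensors="pt").to(device)
            hidden = model(**enc).last_hidden_state          # (B, T, H)
            mask = enc["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
            # Average only over real (non-padding) tokens.
            pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
            chunks.append(pooled.cpu())
    return torch.cat(chunks)
```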
 
### **3. Model Performance Results**
Models were evaluated using Logistic Regression (LR), Support Vector Machines (SVM), and Random Forest (RF); a benchmark sketch follows the table.

| **Toxic-BERT_transformer_emb** | 0.997022 | 0.979531 | 0.991532 | 0.979375 |
| **HateBERT_transformer_emb** | 0.967701 | 0.901875 | 0.965530 | 0.852344 |
| **DistilBERT_transformer_emb** | 0.967614 | 0.898906 | 0.967362 | 0.878125 |
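
The exact split, hyperparameters, and scoring are not given in this README, so everything in the sketch below is assumed; each embedding matrix `X` (e.g. from `embed()` above) and label vector `y` would be fed to the three classical classifiers like this:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def benchmark(X, y):
    """Fit LR / SVM / RF on one embedding set and report test AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)
    classifiers = {
        "LR": LogisticRegression(max_iter=1000),
        "SVM": LinearSVC(),
        "RF": RandomForestClassifier(n_estimators=300, random_state=42),
    }
    for name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        # LinearSVC has no predict_proba; fall back to decision_function.
        if hasattr(clf, "predict_proba"):
            scores = clf.predict_proba(X_te)[:, 1]
        else:
            scores = clf.decision_function(X_te)
        print(f"{name}: test AUC = {roc_auc_score(y_te, scores):.4f}")
```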
 
---

## **Optimization Techniques**

### **4. Dynamic Quantization**
To optimize the teacher model (Toxic-BERT) for CPU inference, dynamic quantization was applied to convert weights from FP32 to INT8 (a code sketch follows the list).
* **Size Reduction**: The model size decreased from **438.01 MB** to **181.49 MB**.
* **Accuracy Retention**: The quantized model maintained a high **Test AUC of 0.9966**, showing negligible performance loss despite the roughly 59% reduction in size.
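
In PyTorch this is a one-liner over the linear layers. A minimal sketch, assuming the fine-tuned teacher is saved locally (the path is hypothetical):

```python
import os
import torch
from transformers import AutoModelForSequenceClassification

# "toxic-bert-teacher" is a hypothetical local path to the fine-tuned teacher.
model = AutoModelForSequenceClassification.from_pretrained("toxic-bert-teacher")

# Convert nn.Linear weights from FP32 to INT8; activations are quantized
# on the fly at inference time, so this targets CPU deployment.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Persist the quantized weights and check the on-disk size.
torch.save(quantized.state_dict(), "toxic-bert-int8.pt")
print(f"{os.path.getsize('toxic-bert-int8.pt') / 1e6:.2f} MB")
```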
 
### **5. Knowledge Distillation**
A smaller student model (**DistilBERT**) was trained to mimic the behavior of the **Toxic-BERT** teacher (a loss sketch follows the list).
* **Loss Function**: A custom **Binary Knowledge Distillation** loss was used, combining Kullback-Leibler (KL) divergence on the teacher's softened probabilities with Cross-Entropy on the hard labels.
* **Student Performance**: Reached a **Validation AUC of 0.9866** after 5 training epochs.
* **Final Footprint**: The student model is **267.86 MB**, significantly more portable than the original **438.03 MB** teacher model.
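
The exact loss is not reproduced in this README; a common formulation consistent with the description above, with assumed temperature and weighting hyperparameters, looks like this:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """KL on softened teacher probabilities + CE on hard labels.

    temperature and alpha are assumed hyperparameters, not taken
    from the original training run.
    """
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Soft-target term, scaled by T^2 as is standard for distillation.
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean")
    kd = kd * temperature ** 2
    # Hard-target term against the binary is_toxic labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```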
 
---

## **Requirements**
* `torch`
* `transformers`
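
Install with `pip install torch transformers`. A hypothetical quick-start with the distilled student; the repo id below is illustrative, not a confirmed checkpoint path:

```python
from transformers import pipeline

# Illustrative repo id; substitute the actual published checkpoint.
clf = pipeline("text-classification", model="manjt/toxic-comment-distilbert")

print(clf("Have a great day!"))
# -> [{'label': ..., 'score': ...}]  (label names depend on the model config)
```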