Commit fc6b85a (parent: 29abb64) — Update README.md

README.md CHANGED
```diff
@@ -1,15 +1,3 @@
-# nlp_model_training
----
-license: apache-2.0
-pipeline_tag: text-classification
-tags:
-- not-for-all-audiences
----
-
----
-license: apache-2.0
-pipeline_tag: text-classification
----
 # Model Card: Fine-Tuned DistilBERT for Offensive/Hate Speech Detection
 
 ## Model Description
```
```diff
@@ -22,11 +10,11 @@ The model, named "distilbert-base-uncased," is pre-trained on a substantial amount of
 which allows it to capture semantic nuances and contextual information present in natural language text.
 It has been fine-tuned with meticulous attention to hyperparameter settings, including batch size and learning rate, to ensure optimal model performance for the offensive/hate speech detection task.
 
-During the fine-tuning process, a batch size
-Additionally, a learning rate was selected to strike a balance between rapid convergence and steady optimization,
-
+During the fine-tuning process, a batch size of 16 was chosen for efficient computation and learning.
+Additionally, a learning rate (2e-5) was selected to strike a balance between rapid convergence and steady optimization,
+ensuring the model not only learns quickly but also steadily refines its capabilities throughout training.
 
-This model has been trained on a proprietary dataset specifically designed for offensive/hate speech detection.
+This model has been trained on a proprietary dataset (< 100k samples) specifically designed for offensive/hate speech detection.
 The dataset consists of text samples, each labeled as "non-offensive" or "offensive."
 The diversity within the dataset allowed the model to learn to identify offensive content accurately.
```
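The hyperparameters added in this commit (batch size 16, learning rate 2e-5) determine the training geometry in a straightforward way. A minimal plain-Python sketch, assuming a hypothetical dataset of 80,000 samples (the card only says "< 100k"), 3 epochs, and the linear learning-rate decay commonly paired with BERT-style fine-tuning (neither epochs nor schedule are stated in the diff):

```python
import math

# Stated in the updated card:
BATCH_SIZE = 16
BASE_LR = 2e-5

# Assumptions for illustration only:
NUM_SAMPLES = 80_000  # hypothetical; the card only says "< 100k"
EPOCHS = 3            # common fine-tuning default, not stated in the diff

steps_per_epoch = math.ceil(NUM_SAMPLES / BATCH_SIZE)
total_steps = steps_per_epoch * EPOCHS

def linear_decay_lr(step: int) -> float:
    """Linear decay from BASE_LR to 0, a schedule often used for BERT-style fine-tuning."""
    return BASE_LR * max(0.0, 1.0 - step / total_steps)

print(steps_per_epoch)      # optimizer steps per epoch: 5000
print(total_steps)          # total optimizer steps: 15000
print(linear_decay_lr(0))   # learning rate at step 0: 2e-05
```

Under these assumptions the model takes 5,000 optimizer steps per epoch, and the learning rate shrinks linearly from 2e-5 toward zero over the 15,000-step run.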