---
license: mit
---
## BERT-based Text Classification Model
This model is a fine-tuned version of bert-base-uncased, adapted for text classification across a diverse set of categories. It was trained on a dataset aggregated from multiple sources, including the News Category Dataset on Kaggle and several other websites.
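As a rough illustration, a fine-tune of this kind can be set up with the `transformers` Trainer API. The file name, column names, and hyperparameters below are assumptions for the sketch, not the exact recipe used for this model.

```python
import pandas as pd
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Hypothetical CSV with "text" and "label" columns (labels as integers 0-11).
df = pd.read_csv("combined_categories.csv")
dataset = Dataset.from_pandas(df)

def tokenize(batch):
    # Tokenize the raw text; sequences are truncated/padded to a fixed length.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=12  # one output per category listed below
)

args = TrainingArguments(
    output_dir="bert-news-classifier",   # placeholder output directory
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

Trainer(model=model, args=args, train_dataset=dataset).train()
```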

The model classifies text into one of the following 12 categories:

- Food
- Videogames & Shows
- Kids and fun
- Homestyle
- Travel
- Health
- Charity
- Electronics & Technology
- Sports
- Cultural & Music
- Education
- Convenience

The model achieves an accuracy of 0.721459, an F1 score of 0.659451, a precision of 0.707620, and a recall of 0.635155.
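Assuming the model is published on the Hugging Face Hub, inference can be run with the `transformers` pipeline API. The repository id below is a placeholder, not the actual name of this model's repo, and the printed score is only an example.

```python
from transformers import pipeline

# Replace the placeholder id with the real repository name for this model.
classifier = pipeline("text-classification", model="user/bert-news-classifier")

result = classifier("Top budget-friendly destinations to visit this summer")
print(result)
# e.g. [{'label': 'Travel', 'score': 0.93}] -- label names depend on the
# id2label mapping stored in the model config.
```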

## Model Architecture
The model uses the BertForSequenceClassification architecture and has been fine-tuned on the dataset described above, with the following key configuration parameters:

- Hidden size: 768
- Number of attention heads: 12
- Number of hidden layers: 12
- Max position embeddings: 512
- Type vocab size: 2
- Vocab size: 30522

The model uses the GELU activation function in its hidden layers and applies dropout with a probability of 0.1 to the attention probabilities to prevent overfitting.
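As a minimal sketch, the configuration above corresponds to a standard `BertConfig` (these values are also the `bert-base-uncased` defaults) and could be reproduced as follows; `num_labels=12` matches the category list on this card.

```python
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig(
    hidden_size=768,
    num_attention_heads=12,
    num_hidden_layers=12,
    max_position_embeddings=512,
    type_vocab_size=2,
    vocab_size=30522,
    hidden_act="gelu",                 # GELU activation in the hidden layers
    attention_probs_dropout_prob=0.1,  # dropout on the attention probabilities
    num_labels=12,                     # one output per category listed above
)

# Randomly initialized model with this architecture; the fine-tuned weights
# would normally be loaded from the published checkpoint instead.
model = BertForSequenceClassification(config)
```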