# Model Card for Racist/Sexist Detection BERT
## Model Description
This model is a fine-tuned BERT model (`bert-base-uncased`) designed for text classification, specifically to detect whether a given text is **racist**, **sexist**, or **neutral**. The model has been trained on labeled data to identify harmful language and categorize it accordingly.
- **Developed by:** Om1024
## Uses
### Direct Use
This model can be used to classify text into three categories: **racist**, **sexist**, or **neutral**, based on the content provided.
### Out-of-Scope Use
This model is not suitable for tasks other than text classification in the specific domain of racist or sexist language detection.
## How to Get Started with the Model
Use the following code to load and use the model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("Om1024/racist-bert")
model = AutoModelForSequenceClassification.from_pretrained("Om1024/racist-bert")
```
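A minimal inference sketch follows. The exact label order is an assumption and is not documented in this card; check `model.config.id2label` for the mapping used by this checkpoint.

```python
import torch

# Classify a single input text (illustrative example).
text = "Example sentence to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index to its label name, if one is configured.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```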
## Training Details
- **Base Model:** `bert-base-uncased`
- **Fine-tuning Data:** Labeled dataset with **racist**, **sexist**, and **neutral** categories.
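
The training script and hyperparameters are not published in this card. A hedged sketch of how a comparable fine-tune could be reproduced with the `Trainer` API is shown below; the toy data, label order, and hyperparameters are assumptions, not the actual training setup.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Toy stand-in dataset; a real reproduction would use a labeled corpus
# with racist / sexist / neutral annotations.
raw = Dataset.from_dict({
    "text": ["an example neutral sentence", "another example sentence"],
    "label": [2, 2],  # 0 = racist, 1 = sexist, 2 = neutral (assumed order)
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

def tokenize(batch):
    # Tokenize the raw text into fixed-length input IDs and attention masks.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train_ds = raw.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="racist-bert-finetune",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
```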
---