# Model Card for Racist/Sexist Detection BERT
### Model Description
This model is a fine-tuned BERT model (`bert-base-uncased`) designed for text classification, specifically to detect whether a given text is **racist**, **sexist**, or **neutral**. The model has been trained on labeled data to identify harmful language and categorize it accordingly.
- **Developed by:** Om1024
## Uses
### Direct Use
This model can be used to classify text into three categories: **racist**, **sexist**, or **neutral**, based on the content provided.
### Out-of-Scope Use
This model is not suitable for tasks other than text classification in the specific domain of racist or sexist language detection.
## How to Get Started with the Model
Use the following code to load and use the model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Om1024/racist-bert")
model = AutoModelForSequenceClassification.from_pretrained("Om1024/racist-bert")
```
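Once the model and tokenizer are loaded, classification reduces to tokenizing the input, running a forward pass, and mapping the highest-scoring logit to a label. The sketch below shows that last step as a standalone helper; the `ID2LABEL` mapping here is an assumption for illustration — the authoritative mapping lives in `model.config.id2label` and should be read from there.

```python
import math

# Assumed label order for illustration only --
# verify against model.config.id2label on the actual checkpoint.
ID2LABEL = {0: "neutral", 1: "racist", 2: "sexist"}


def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def predict_label(logits, id2label=ID2LABEL):
    """Return (label, confidence) for the highest-scoring class."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return id2label[best], probs[best]
```

With the loaded model, this would be used roughly as: `logits = model(**tokenizer(text, return_tensors="pt")).logits[0].tolist()`, then `predict_label(logits)`.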
## Training Details
- **Base Model:** `bert-base-uncased`
- **Fine-tuning Data:** Labeled dataset with categories for **racist**, **sexist**, and **neutral** text.
---