# Hausa Bias Detector

## Model Description

This model detects gender bias in Hausa text. It classifies input into two categories:

- `Gender`: the text contains gender bias or stereotypes
- `NoBias`: the text is neutral and contains no detected bias
## Intended Use

This model is designed to:

- Detect gender bias in Hausa content
- Support content moderation and filtering
- Raise awareness of biased language
- Assist in creating more inclusive Hausa content
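For the moderation use case, the classifier's output can feed a simple filter. A minimal sketch, assuming you wrap the pipeline call yourself; `flag_biased`, the `classify` callable, and the 0.5 threshold are illustrative choices, not part of this model card:

```python
from typing import Callable, Iterable

def flag_biased(texts: Iterable[str],
                classify: Callable[[str], dict],
                threshold: float = 0.5) -> list:
    """Return the texts whose top prediction is 'Gender' at or above threshold."""
    flagged = []
    for text in texts:
        pred = classify(text)  # expected shape: {'label': ..., 'score': ...}
        if pred["label"] == "Gender" and pred["score"] >= threshold:
            flagged.append(text)
    return flagged

# Stub classifier standing in for the real pipeline, for demonstration only:
stub = lambda t: ({"label": "Gender", "score": 0.9} if "biased" in t
                  else {"label": "NoBias", "score": 0.8})
print(flag_biased(["a biased sentence", "a neutral sentence"], stub))
# ['a biased sentence']
```

With the real model, `classify` would be `lambda t: detector(t)[0]`, since the pipeline returns a list of dicts.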
## Training Data

The model was trained on a curated dataset of Hausa sentences labeled for gender bias.
## Model Performance

- Accuracy: see the evaluation metrics in the training logs
- F1 score: see the evaluation metrics in the training logs
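Since the card defers to the training logs, you can also compute these metrics yourself on a labeled held-out set. A sketch using scikit-learn; the gold labels and predictions below are made-up placeholders, not real results:

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and model predictions for a held-out Hausa eval set
y_true = ["Gender", "NoBias", "NoBias", "Gender", "NoBias"]
y_pred = ["Gender", "NoBias", "Gender", "Gender", "NoBias"]

print(accuracy_score(y_true, y_pred))                # 0.8
print(f1_score(y_true, y_pred, pos_label="Gender"))  # 0.8
```

Passing `pos_label="Gender"` makes the F1 score treat the bias class as positive, which is usually what matters for a detector.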
## Usage

```python
from transformers import pipeline

# Load the model from the Hugging Face Hub
detector = pipeline('text-classification', model='mosesdaudu/hausa-bias-detector')

# Detect bias
result = detector("Mwanaume ni mkubwa kuliko mwanamke")
# "A man is greater than a woman"
print(result)
# Output: [{'label': 'Gender', 'score': 0.95}]

result = detector("Hakuna tofauti kati ya wanaume na wanawake")
# "There is no difference between men and women"
print(result)
# Output: [{'label': 'NoBias', 'score': 0.92}]
```
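When acting on a prediction, it can help to apply a confidence threshold rather than trusting the raw label. A small helper over the output format shown above; `is_biased` and the 0.8 threshold are illustrative assumptions:

```python
def is_biased(result, threshold=0.8):
    """Interpret one pipeline output (a list containing a single dict)."""
    top = result[0]
    return top["label"] == "Gender" and top["score"] >= threshold

# Works directly on the output format shown above:
print(is_biased([{"label": "Gender", "score": 0.95}]))  # True
print(is_biased([{"label": "NoBias", "score": 0.92}]))  # False
```

Raising the threshold trades recall for precision, which suits moderation settings where false positives are costly.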
## Limitations

- The model is trained specifically for the Hausa language
- Performance may vary on domains not represented in the training data
- The model may not detect all forms of bias
- It should be used as a tool to assist human judgment, not replace it
## Ethical Considerations

This model is designed to detect bias, but like all ML models it may carry biases of its own. Users should:

- Use it as one tool among many for bias detection
- Validate results with human review
- Be aware of potential false positives and false negatives
- Consider cultural context when interpreting results
## Citation

If you use this model, please cite:

```bibtex
@misc{hausa-bias-detector,
  author    = {Moses Daudu},
  title     = {Hausa Bias Detector},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/mosesdaudu/hausa-bias-detector}
}
```
## Contact

For questions or feedback, please open an issue on the model repository.