How to use Guscode/DKbert-hatespeech-detection with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Guscode/DKbert-hatespeech-detection")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Guscode/DKbert-hatespeech-detection")
model = AutoModelForSequenceClassification.from_pretrained("Guscode/DKbert-hatespeech-detection")
```
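A text-classification pipeline returns a list of `{"label": ..., "score": ...}` dicts. A minimal sketch of post-processing those outputs, assuming the default Transformers label names (`"LABEL_0"`/`"LABEL_1"`, with `LABEL_1` taken here as the hateful class — check `model.config.id2label` on the real model before relying on this):

```python
def flag_hateful(results, hateful_label="LABEL_1", threshold=0.5):
    """Return True for each pipeline result classified as hateful.

    `hateful_label` is an assumption about this model's label mapping;
    verify it against model.config.id2label.
    """
    return [r["label"] == hateful_label and r["score"] >= threshold
            for r in results]

# Hand-written results in the pipeline's output format (not real model output):
fake = [{"label": "LABEL_1", "score": 0.92}, {"label": "LABEL_0", "score": 0.81}]
print(flag_hateful(fake))  # [True, False]
```

Raising `threshold` trades recall for precision, which may matter given the recall figure reported below.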
DKbert-hatespeech-classification
Use this model to detect hate speech in Danish. For details, a usage guide, and a command-line tool, see the DK hate github.
Training data
The training data is from OffensEval2020, which can be found here.
Performance
The model achieves a macro F1-score of 0.78.
- Precision (hateful class): 0.77
- Recall (hateful class): 0.49
See more on DK hate github
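The reported numbers combine per-class precision and recall into a macro F1. A small sketch of how these quantities relate for a binary hateful/not-hateful task (the counts in the example are illustrative, not from the evaluation):

```python
def hateful_metrics(tp, fp, fn, tn):
    """Precision/recall for the hateful class and macro F1 over both classes.

    tp/fp/fn/tn are counts with "hateful" as the positive class.
    """
    prec_h = tp / (tp + fp)                          # precision, hateful class
    rec_h = tp / (tp + fn)                           # recall, hateful class
    f1_h = 2 * prec_h * rec_h / (prec_h + rec_h)
    prec_n = tn / (tn + fn)                          # precision, non-hateful class
    rec_n = tn / (tn + fp)                           # recall, non-hateful class
    f1_n = 2 * prec_n * rec_n / (prec_n + rec_n)
    macro_f1 = (f1_h + f1_n) / 2                     # unweighted mean over classes
    return prec_h, rec_h, macro_f1

# Illustrative confusion-matrix counts:
print(hateful_metrics(8, 2, 2, 88))
```

Because the macro F1 averages over both classes, it can sit well above the hateful-class recall when the majority class is classified accurately, as is the case here.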
Training procedure
- BotXO Nordic BERT
- Learning rate: 1e-5
- Batch size: 16
- Max sequence length: 128
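The hyperparameters above could be wired into a Transformers fine-tuning setup roughly as follows. This is a hypothetical sketch, not the authors' training script: the base-model identifier, output directory, and epoch count are assumptions (the card does not state them).

```python
# Hyperparameters stated in the model card:
HPARAMS = {
    "learning_rate": 1e-5,
    "per_device_train_batch_size": 16,
    "max_seq_length": 128,
}

def build_trainer(model, tokenizer, train_dataset):
    """Assemble a Trainer with the card's hyperparameters (sketch only)."""
    from transformers import Trainer, TrainingArguments

    args = TrainingArguments(
        output_dir="dkbert-hatespeech",  # hypothetical output path
        learning_rate=HPARAMS["learning_rate"],
        per_device_train_batch_size=HPARAMS["per_device_train_batch_size"],
        num_train_epochs=3,  # assumption: epoch count is not in the card
    )
    return Trainer(model=model, args=args, train_dataset=train_dataset)
```

The max sequence length of 128 would be applied at tokenization time, e.g. `tokenizer(texts, truncation=True, max_length=HPARAMS["max_seq_length"])`.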
Project information
This model was made in collaboration between Johan Horsmans and Gustav Aarup Lauridsen for their Cultural Data Science Exam.