---
license: cc-by-nc-4.0
language:
- es
base_model:
- dccuchile/bert-base-spanish-wwm-uncased
datasets:
- manueltonneau/spanish-hate-speech-superset
tags:
- BETO
- beto
- hate_speech
pipeline_tag: fill-mask
library_name: transformers
widget:
- text: "Ella es una [MASK]"
---
# misoBETO
misoBETO is a domain-adapted version of the [Spanish BERT](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) language model (BETO), specialized for the misogyny domain.

It was adapted using a guided lexical masking strategy during masked language model (MLM) pretraining: instead of masking tokens at random, we prioritized masking words that appear in a [misogyny-specific lexicon](https://github.com/fmplaza/hate-speech-spanish-lexicons/blob/master/misogyny_lexicon.txt). The base corpus for domain adaptation was the [Spanish Hate Speech Superset](https://huggingface.co/datasets/manueltonneau/spanish-hate-speech-superset).

The model was trained for four epochs with a batch size of 8 and a learning rate of 2e-5 on an NVIDIA GeForce RTX 5090 GPU.
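The exact adaptation code is not included here; the sketch below only illustrates the lexicon-guided masking idea. The function name, the placeholder lexicon entries, and both masking probabilities are assumptions, not the settings used for misoBETO.
```python
# Illustrative sketch only: prioritize masking lexicon words during MLM.
# The probabilities and lexicon entries below are assumptions.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")

# In practice, load every word from the misogyny lexicon linked above.
lexicon = {"bruja", "zorra"}  # placeholder entries

def lexicon_guided_mask(text, lex_prob=0.8, base_prob=0.15):
    """Mask lexicon words with high probability, other tokens at the usual MLM rate."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    labels = torch.full_like(input_ids, -100)  # -100 is ignored by the MLM loss

    tokens = tokenizer.convert_ids_to_tokens(input_ids.tolist())
    for i, tok in enumerate(tokens):
        if tok in tokenizer.all_special_tokens:
            continue  # never mask [CLS], [SEP], etc.
        p = lex_prob if tok.lstrip("#") in lexicon else base_prob
        if torch.rand(1).item() < p:
            labels[i] = input_ids[i]           # predict the original token
            input_ids[i] = tokenizer.mask_token_id
    return enc, labels
```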
## Usage
```python
from transformers import pipeline

# Top candidates for the [MASK] position, ranked by score.
pipe = pipeline("fill-mask", model="citiusLTL/misoBETO")
predictions = pipe("Ella es una [MASK]")
print(predictions)
```
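Each returned candidate is a dict containing the model's `score`, the predicted `token_str`, and the completed `sequence`.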
## Load model directly
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("citiusLTL/misoBETO")
model = AutoModelForMaskedLM.from_pretrained("citiusLTL/misoBETO")
```
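Continuing from the snippet above, the top predictions for a masked position can be read directly from the logits. A minimal sketch (the example sentence and the choice of five candidates are arbitrary):
```python
import torch

inputs = tokenizer("Ella es una [MASK]", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] token and take the five highest-scoring vocabulary ids.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```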