Annotation Layer: MoPE
This model is part of GePaDeU, which equips parliamentary debates of the German Bundestag with rich semantic and pragmatic information across multiple annotation layers.
parl-german-mope is trained on parliamentary speeches to tag sequences with Mentions of the People and the Elite (MoPE).
Model Overview
- Task Type: Token classification
- Base Model: GBERT base
- Fine-tuning method: full fine-tuning
- Language: German
Dataset
The model was trained and evaluated on 267 manually annotated parliamentary speeches from the German Bundestag, delivered between 2017 and 2021, resulting in 9,297 annotated mentions.
Model Training
Evaluation
How to Use
Please refer to our GitHub repository for detailed instructions on the required input format and how to run the model.
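As a minimal sketch only (the authoritative instructions are in the GitHub repository), a token-classification model hosted on the Hugging Face Hub can typically be loaded with the `transformers` pipeline API. The aggregation strategy and the example sentence below are assumptions, not taken from the project's documentation:

```python
# Hedged sketch: tagging a German speech segment with parl-german-mope.
# Assumes `pip install transformers torch`; the label set and the exact
# expected input format are described in the project's GitHub repository.

MODEL_ID = "schlenker/parl-german-mope"


def tag_mentions(text: str):
    """Return MoPE spans (mentions of the people / the elite) found in `text`."""
    from transformers import pipeline  # lazy import; requires transformers installed

    tagger = pipeline(
        "token-classification",
        model=MODEL_ID,
        aggregation_strategy="simple",  # assumption: merge word pieces into spans
    )
    return tagger(text)


if __name__ == "__main__":
    # Hypothetical example sentence, not from the training data.
    for span in tag_mentions("Die Bürgerinnen und Bürger vertrauen der Regierung nicht mehr."):
        print(span["entity_group"], span["word"], round(span["score"], 3))
```

Each returned span carries the predicted label, the surface string, and a confidence score; see the repository for the layer-specific label inventory.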
Limitations
Model tree for schlenker/parl-german-mope
- Base model: google-bert/bert-base-german-cased