Annotation Layer: MoPE

This model is part of GePaDeU, a project that enriches parliamentary debates of the German Bundestag with semantic and pragmatic information across multiple annotation layers.

parl-german-mope is trained on parliamentary speeches to tag token sequences with Mentions of the People and the Elite (MoPE).


πŸ” Model Overview

  • Task Type: Token classification
  • Base Model: GBERT base
  • Fine-tuning method: full fine-tuning
  • Language: German

πŸ“š Dataset

The model was trained and evaluated on 267 manually annotated parliamentary speeches from the German Bundestag, delivered from 2017 to 2021, containing 9,297 annotated mentions in total.


πŸ‹οΈ Model Training


πŸ“Š Evaluation


πŸš€ How to Use

Please refer to our GitHub repository for detailed instructions on the required input format and how to run the model.
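As a starting point, the snippet below sketches how a token-classification model like this one is typically loaded with the Hugging Face `transformers` pipeline. The exact label names the model emits and the expected input format are assumptions here; consult the GitHub repository for the authoritative usage instructions.

```python
# Hedged sketch: minimal inference with the `transformers` token-classification
# pipeline. The MoPE tag set returned by the model is not documented here,
# so the printed label names are whatever the model's config defines.
from transformers import pipeline


def tag_mentions(text: str, model_id: str = "schlenker/parl-german-mope"):
    """Tag mentions of the people and the elite in a German sentence."""
    tagger = pipeline(
        "token-classification",
        model=model_id,
        aggregation_strategy="simple",  # merge sub-word pieces into spans
    )
    return tagger(text)


# Example (hypothetical sentence):
# for span in tag_mentions("Die Buergerinnen und Buerger erwarten Antworten."):
#     print(span["entity_group"], span["word"], round(span["score"], 3))
```

Each returned span is a dict with the predicted label (`entity_group`), the matched text (`word`), a confidence score, and character offsets into the input.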


⚠️ Limitations
