---
language:
  - de
base_model:
  - google-bert/bert-base-german-cased
pipeline_tag: token-classification
library_name: transformers
tags:
  - political-text-analysis
---

# Annotation Layer: MoPE

This model is part of GePaDeU, which equips parliamentary debates of the German Bundestag with rich semantic and pragmatic information across multiple annotation layers.

**parl-german-mope** is trained on parliamentary speeches to tag token sequences with Mentions of the People and the Elite (MoPE).


## 🔍 Model Overview

- **Task Type:** Token classification
- **Base Model:** GBERT base
- **Fine-tuning Method:** Full fine-tuning
- **Language:** German

## 📚 Dataset

Models were trained and evaluated on 267 manually annotated parliamentary speeches from the German Bundestag, spanning the years 2017 to 2021 and comprising 9,297 annotated mentions.


πŸ‹οΈ Model Training


## 📊 Evaluation


## 🚀 How to Use

Please refer to our GitHub repo for detailed instructions on the required input format and how to run the model.
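As a minimal sketch, the model can also be loaded with the standard `transformers` token-classification pipeline. The repo id in the usage comment is a placeholder (the full Hugging Face id including the namespace is an assumption here, not taken from this card); see the GitHub repo for the authoritative instructions.

```python
from transformers import pipeline


def load_mope_tagger(model_id: str):
    """Load the MoPE token-classification model as a transformers pipeline.

    `model_id` is the Hugging Face repo id of this model
    (e.g. "<namespace>/parl-german-mope" -- placeholder, replace with the
    actual id). `aggregation_strategy="simple"` merges sub-word tokens
    back into full mention spans.
    """
    return pipeline(
        "token-classification",
        model=model_id,
        aggregation_strategy="simple",
    )


# Example usage (downloads the model weights on first call):
# tagger = load_mope_tagger("<namespace>/parl-german-mope")
# for mention in tagger("Die Bürger erwarten Antworten von der Regierung."):
#     print(mention["entity_group"], mention["word"], round(mention["score"], 3))
```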


## ⚠️ Limitations