Annotation Layer: NER

This model is part of GePaDeU, which equips parliamentary debates of the German Bundestag with rich semantic and pragmatic information across multiple annotation layers.

parl-german-ner is trained on a mix of news headlines and parliamentary speeches to tag sequences with fine-grained named entities (e.g., geo-political entities, persons, organizations). The NER tag inventory is adapted from the OntoNotes NER inventory.


πŸ” Model Overview

  • Task Type: Token classification
  • Base Model: GBERT large
  • Fine-tuning method: full fine-tuning
  • Language: German

πŸ“š Dataset

The model was trained and evaluated on a mix of Twitter news-headline data (Ruppenhofer et al., 2020) and 40 manually annotated parliamentary speeches; the latter comprise 1,639 annotated entities.


πŸ‹οΈ Model Training


πŸ“Š Evaluation


πŸš€ How to Use

Please refer to our GitHub repo for detailed instructions on the required input format and how to run the model.
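For a quick start, the model can typically be loaded with the 🤗 `transformers` token-classification pipeline. This is a minimal sketch, assuming the model is published on the Hub under the repo id `schlenker/parl-german-ner`; the example sentence is illustrative and not from the training data.

```python
# Minimal usage sketch via the Hugging Face transformers pipeline.
# Assumption: the model is available on the Hub as "schlenker/parl-german-ner".
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="schlenker/parl-german-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

text = "Die Bundeskanzlerin sprach gestern im Bundestag in Berlin."
for entity in ner(text):
    # Each result carries the predicted entity type, surface span, and score.
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

Note that the first call downloads the model weights; see the GitHub repo for the exact input format expected by the evaluation scripts.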


⚠️ Limitations
