Annotation Layer: NER
This model is part of GePaDeU, which equips parliamentary debates of the German Bundestag with rich semantic and pragmatic information across multiple annotation layers.
parl-german-ner is trained on a mix of news headlines and parliamentary speeches to tag tokens in a sequence with fine-grained named-entity labels (e.g., geo-political entities, persons, organizations). The tag inventory is adapted from the OntoNotes NER scheme.
Model Overview
- Task Type: Token classification
- Base Model: GBERT large (deepset/gbert-large)
- Fine-tuning method: full fine-tuning
- Language: German
Dataset
The model was trained and evaluated on a mix of Twitter news-headline data (Ruppenhofer et al., 2020) and 40 manually annotated parliamentary speeches, the latter containing 1,639 annotated entities.
Model Training

Evaluation
How to Use
Please refer to our GitHub repository for detailed instructions on the required input format and how to run the model.
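Token-classification models typically emit one BIO tag per token, which downstream code then groups into entity spans. As a minimal, self-contained sketch of that post-processing step (the example tokens, tag names, and output format here are illustrative assumptions, not the model's exact interface; see the GitHub repo for the authoritative usage):

```python
def bio_to_spans(tokens, tags):
    """Group per-token BIO tags into (label, surface text) entity spans.

    Assumes tags follow the BIO scheme: "B-X" opens an entity of type X,
    "I-X" continues it, and "O" marks tokens outside any entity.
    """
    spans, cur_label, cur_tokens = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A new entity begins; flush any entity in progress.
            if cur_label is not None:
                spans.append((cur_label, " ".join(cur_tokens)))
            cur_label, cur_tokens = tag[2:], [tok]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            # Continuation of the current entity.
            cur_tokens.append(tok)
        else:
            # "O" tag (or an ill-formed continuation): close the open entity.
            if cur_label is not None:
                spans.append((cur_label, " ".join(cur_tokens)))
            cur_label, cur_tokens = None, []
    if cur_label is not None:
        spans.append((cur_label, " ".join(cur_tokens)))
    return spans


# Hypothetical tagged sentence, using OntoNotes-style labels.
tokens = ["Angela", "Merkel", "sprach", "im", "Bundestag", "."]
tags = ["B-PERSON", "I-PERSON", "O", "O", "B-ORG", "O"]
print(bio_to_spans(tokens, tags))
# → [('PERSON', 'Angela Merkel'), ('ORG', 'Bundestag')]
```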
Limitations