---
base_model: xlm-roberta-base
datasets: E-katrin/train50
language: sv
library_name: transformers
license: gpl-3.0
metrics:
  - accuracy
  - f1
pipeline_tag: token-classification
tags:
  - pytorch
model-index:
  - name: E-katrin/train50_20e-5_20ep
    results:
      - task:
          type: token-classification
        dataset:
          name: train50
          type: E-katrin/train50
          split: validation
        metrics:
          - type: f1
            value: 0.7487893462469734
            name: Null F1
          - type: f1
            value: 0.02589369367847487
            name: Lemma F1
          - type: f1
            value: 0.046988993229787474
            name: Morphology F1
          - type: accuracy
            value: 0.6103896103896104
            name: Ud Jaccard
          - type: accuracy
            value: 0.44057780695994747
            name: Eud Jaccard
          - type: f1
            value: 0.7473814912611594
            name: Miscs F1
          - type: f1
            value: 0.48688196446933857
            name: Deepslot F1
          - type: f1
            value: 0.4284286148206527
            name: Semclass F1
---

# Model Card for train50_20e-5_20ep

A transformer-based multihead parser for CoBaLD annotation.

This model parses pre-tokenized CoNLL-U text and jointly labels each token with three tiers of tags:

- Grammatical tags (lemma, UPOS, XPOS, morphological features),
- Syntactic tags (basic and enhanced Universal Dependencies),
- Semantic tags (deep slot and semantic class).
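For readers unfamiliar with the input format, the sketch below shows what a pre-tokenized CoNLL-U sentence looks like and how its ten tab-separated columns can be split into per-token field lists. The sentence and the `read_tokens` helper are illustrative assumptions, not part of this model's code or training data.

```python
# A hypothetical one-sentence CoNLL-U block. Columns (per the CoNLL-U
# standard): ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC.
conllu = """\
1\tThe\tthe\tDET\t_\t_\t2\tdet\t_\t_
2\tcat\tcat\tNOUN\t_\t_\t3\tnsubj\t_\t_
3\tsleeps\tsleep\tVERB\t_\t_\t0\troot\t_\t_
"""

def read_tokens(block: str) -> list[list[str]]:
    """Split one CoNLL-U sentence block into per-token field lists."""
    rows = []
    for line in block.strip().splitlines():
        if line.startswith("#"):       # skip sentence-level comment lines
            continue
        rows.append(line.split("\t"))  # 10 tab-separated CoNLL-U columns
    return rows

tokens = read_tokens(conllu)
print([row[1] for row in tokens])  # → ['The', 'cat', 'sleeps']
```

The parser consumes tokens in this shape and fills in the grammatical, syntactic, and semantic columns for each one.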

## Model Sources

## Citation

```bibtex
@inproceedings{baiuk2025cobald,
  title={CoBaLD Parser: Joint Morphosyntactic and Semantic Annotation},
  author={Baiuk, Ilia and Baiuk, Alexandra and Petrova, Maria},
  booktitle={Proceedings of the International Conference "Dialogue"},
  volume={I},
  year={2025}
}
```