---
base_model: xlm-roberta-base
datasets:
  - E-katrin/train100
language: sv
library_name: transformers
license: gpl-3.0
metrics:
  - accuracy
  - f1
pipeline_tag: token-classification
tags:
  - pytorch
model-index:
  - name: E-katrin/train100_10e-5_30ep
    results:
      - task:
          type: token-classification
        dataset:
          name: train100
          type: E-katrin/train100
          split: validation
        metrics:
          - type: f1
            value: 0.8066830222663892
            name: Null F1
          - type: f1
            value: 0.03204308182713194
            name: Lemma F1
          - type: f1
            value: 0.051647738151801764
            name: Morphology F1
          - type: accuracy
            value: 0.6343919442292796
            name: UD Jaccard
          - type: accuracy
            value: 0.4875252865812542
            name: EUD Jaccard
          - type: f1
            value: 0.7474016471478401
            name: Miscs F1
          - type: f1
            value: 0.5270624164459793
            name: Deep Slot F1
          - type: f1
            value: 0.4375491477986083
            name: Semantic Class F1
---

# Model Card for train100_10e-5_30ep

A transformer-based multihead parser for CoBaLD annotation.

This model parses pre-tokenized CoNLL-U text and jointly labels each token with three tiers of tags:

- **Grammatical** tags (lemma, UPOS, XPOS, morphological features),
- **Syntactic** tags (basic and enhanced Universal Dependencies),
- **Semantic** tags (deep slot and semantic class).
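As input, the parser expects sentences in the standard ten-column CoNLL-U format. A minimal sketch of pulling the token forms out of one such sentence block — the helper below is illustrative only and not part of this repository's code:

```python
# Hypothetical helper (not from this repository): extract token forms
# from one pre-tokenized CoNLL-U sentence. Per the CoNLL-U spec, ID and
# FORM are the first two of ten tab-separated columns.
def conllu_forms(block: str) -> list[str]:
    forms = []
    for line in block.strip().splitlines():
        if not line or line.startswith("#"):  # skip comments and blanks
            continue
        fields = line.split("\t")
        tok_id, form = fields[0], fields[1]
        # Skip multiword-token ranges (e.g. "1-2") and empty nodes ("1.1")
        if "-" in tok_id or "." in tok_id:
            continue
        forms.append(form)
    return forms

sentence = (
    "# text = Hello world !\n"
    "1\tHello\t_\t_\t_\t_\t_\t_\t_\t_\n"
    "2\tworld\t_\t_\t_\t_\t_\t_\t_\t_\n"
    "3\t!\t_\t_\t_\t_\t_\t_\t_\t_\n"
)
print(conllu_forms(sentence))  # ['Hello', 'world', '!']
```

The model then fills in the grammatical, syntactic, and semantic columns for each of these tokens.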

## Model Sources

## Citation

```bibtex
@inproceedings{baiuk2025cobald,
  title     = {CoBaLD Parser: Joint Morphosyntactic and Semantic Annotation},
  author    = {Baiuk, Ilia and Baiuk, Alexandra and Petrova, Maria},
  booktitle = {Proceedings of the International Conference "Dialogue"},
  volume    = {I},
  year      = {2025}
}
```