---
base_model: xlm-roberta-base
datasets: E-katrin/train20
language: sv
library_name: transformers
license: gpl-3.0
metrics:
  - accuracy
  - f1
pipeline_tag: token-classification
tags:
  - pytorch
model-index:
  - name: E-katrin/train20_10e-5_1ep
    results:
      - task:
          type: token-classification
        dataset:
          name: train20
          type: E-katrin/train20
          split: validation
        metrics:
          - type: f1
            value: 0.7483831851253031
            name: Null F1
          - type: f1
            value: 0.013643256925648954
            name: Lemma F1
          - type: f1
            value: 0.04772018743123946
            name: Morphology F1
          - type: accuracy
            value: 0.5774121166791324
            name: UD Jaccard
          - type: accuracy
            value: 0.4032561051972448
            name: EUD Jaccard
          - type: f1
            value: 0.7461145129726658
            name: Misc F1
          - type: f1
            value: 0.46366651665566627
            name: Deepslot F1
          - type: f1
            value: 0.35564630634846556
            name: Semclass F1
---

# Model Card for train20_10e-5_1ep

A transformer-based multihead parser for CoBaLD annotation.

This model parses pre-tokenized CoNLL-U text and jointly labels each token with three tiers of tags:

- Grammatical tags (lemma, UPOS, XPOS, morphological features),
- Syntactic tags (basic and enhanced Universal Dependencies),
- Semantic tags (deep slot and semantic class).
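For reference, the input format is standard CoNLL-U: one token per line, ten tab-separated columns. The helper, example sentence, and values below are a minimal illustrative sketch of that layout, not code or output from this model (CoBaLD extends the format with additional semantic columns not shown here).

```python
# Sketch: split one CoNLL-U token line into its ten standard columns.
# The Swedish example token is illustrative, not model output.
CONLLU_COLUMNS = [
    "id", "form", "lemma", "upos", "xpos",
    "feats", "head", "deprel", "deps", "misc",
]

def parse_token_line(line: str) -> dict:
    """Map one tab-separated CoNLL-U token line to named columns."""
    fields = line.rstrip("\n").split("\t")
    if len(fields) != len(CONLLU_COLUMNS):
        raise ValueError(
            f"expected {len(CONLLU_COLUMNS)} columns, got {len(fields)}"
        )
    return dict(zip(CONLLU_COLUMNS, fields))

token = parse_token_line("1\tkatter\tkatt\tNOUN\t_\tNumber=Plur\t0\troot\t_\t_")
print(token["lemma"], token["upos"])  # katt NOUN
```

The model predicts the lemma, UPOS/XPOS, feature, dependency, and semantic fields jointly for each pre-tokenized form.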

## Model Sources

## Citation

```bibtex
@inproceedings{baiuk2025cobald,
  title={CoBaLD Parser: Joint Morphosyntactic and Semantic Annotation},
  author={Baiuk, Ilia and Baiuk, Alexandra and Petrova, Maria},
  booktitle={Proceedings of the International Conference "Dialogue"},
  volume={I},
  year={2025}
}
```