---
language:
  - de
license: mit
library_name: transformers
pipeline_tag: text-generation
tags:
  - t5
  - german
  - scientific
  - wechsel
datasets:
  - unpaywall-scientific
---

# DE-T5-Sci-Transfer-15k

The final German scientific model: EN-T5-Sci initialized with WECHSEL, then continued for 15 000 steps on German scientific data (same regimen as DE-T5-Base-15k). Checkpoint: `cross_lingual_transfer/logs/train/.../step-step=015000.ckpt`.
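
At a high level, WECHSEL re-initializes the embeddings of the new (German) tokenizer from the source model's embeddings, using aligned static word embeddings and a bilingual dictionary. The sketch below follows the `wechsel` library's published usage; the checkpoint and tokenizer ids are placeholders, and the exact thesis code may differ.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from wechsel import WECHSEL, load_embeddings

# Placeholders: substitute the actual EN-T5-Sci checkpoint and German tokenizer.
source_tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
target_tokenizer = AutoTokenizer.from_pretrained("path/to/german-tokenizer")

wechsel = WECHSEL(
    load_embeddings("en"),          # static fastText embeddings, source language
    load_embeddings("de"),          # ... and target language
    bilingual_dictionary="german",  # word-translation dictionary for alignment
)

# Map the source embedding matrix into the target tokenizer's vocabulary.
target_embeddings, info = wechsel.apply(
    source_tokenizer,
    target_tokenizer,
    model.get_input_embeddings().weight.detach().numpy(),
)
model.get_input_embeddings().weight.data = torch.from_numpy(target_embeddings)
```

After this swap, continued pretraining proceeds with the usual span-corruption objective on the German corpus.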

## Model Details

- Base: EN-T5-Sci weights, German tokenizer
- Optimizer: Adafactor, lr=1e-3, inverse-sqrt schedule, 1.5k warmup steps, gradient clipping 1.0
- Effective batch size: 48 (per-GPU 48, gradient accumulation 1)
- Objective: span corruption (15 % noise density, mean span length 3)
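
To make the objective concrete: spans covering roughly 15 % of the input tokens are replaced by sentinel tokens, and the target reconstructs only the masked spans. The toy function below illustrates the idea (fixed span length, word-level tokens); it is not the actual preprocessing code.

```python
import random

def span_corrupt(tokens, noise_density=0.15, span_len=3, seed=None):
    """Toy T5-style span corruption: replace random spans with sentinels."""
    rng = random.Random(seed)
    n_noise = max(1, round(len(tokens) * noise_density))
    n_spans = max(1, round(n_noise / span_len))
    starts = sorted(rng.sample(range(len(tokens)), n_spans))

    inputs, targets = [], []
    pos, sentinel = 0, 0
    for start in starts:
        if start < pos:  # skip spans overlapping the previous one
            continue
        inputs.extend(tokens[pos:start])
        inputs.append(f"<extra_id_{sentinel}>")       # sentinel in the input
        targets.append(f"<extra_id_{sentinel}>")      # same sentinel in the target
        targets.extend(tokens[start:start + span_len])
        pos = start + span_len
        sentinel += 1
    inputs.extend(tokens[pos:])
    return inputs, targets

# Example: "die Studie zeigt einen signifikanten Effekt" -> input with
# <extra_id_0> placeholders, target containing the masked words.
```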

## Training Data

German subset of the Unpaywall-derived scientific corpus, chunked into sliding windows of 512 tokens with 50 % overlap. Same cleaning pipeline as the English run.
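
The windowing can be pictured with the following helper (an illustrative sketch, not the thesis pipeline):

```python
def sliding_windows(token_ids, window=512, overlap=0.5):
    """Split a token sequence into fixed-size windows with fractional overlap."""
    stride = max(1, int(window * (1 - overlap)))  # 256 tokens at 50% overlap
    chunks, start = [], 0
    while True:
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):  # last window reaches the end
            break
        start += stride
    return chunks
```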

## Evaluation (Global-MMLU, zero-shot)

| Metric           | EN     | DE     |
|------------------|--------|--------|
| Overall accuracy | 0.2738 | 0.2700 |
| Humanities       | 0.2559 | 0.2536 |
| STEM             | 0.2867 | 0.2851 |
| Social Sciences  | 0.3058 | 0.3055 |
| Other            | 0.2562 | 0.2443 |

This is the best-performing German checkpoint across both languages in the final evaluation (`evaluation_results/scientific_crosslingual_transfer_eval_full_15k`).
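
For reproduction, one common way to score zero-shot multiple-choice benchmarks with an encoder-decoder model is to compare per-option likelihoods and pick the most likely answer. The sketch below shows that pattern; it is an assumption for illustration, not necessarily the protocol behind the numbers above.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

def option_nll(model, tokenizer, question, option):
    """Mean per-token negative log-likelihood of `option` given `question`."""
    enc = tokenizer(question, return_tensors="pt")
    labels = tokenizer(option, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)  # loss = mean cross-entropy on labels
    return out.loss.item()

# Pick the answer the model finds most likely (lowest NLL):
# prediction = min(options, key=lambda o: option_nll(model, tok, question, o))
```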

## Intended Use

Zero-shot QA in German/English scientific domains, or as a strong starting point for German task-specific fine-tuning (NER, relation extraction, etc.).
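
A minimal inference example (the repo id below is a hypothetical placeholder; substitute the actual Hub path of this checkpoint):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "rausch/DE-T5-Sci-Transfer-15k"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# German scientific prompt ("Question: What does Newton's second law
# describe? Answer:"); T5 takes plain text-to-text input.
prompt = "Frage: Was beschreibt das zweite Newtonsche Gesetz? Antwort:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```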

## Limitations

- Still inherits the T5-base context length and parameter budget.
- Evaluated only on Global-MMLU; downstream fine-tuning is recommended for specialized tasks.
- Training corpus is domain-specific (scientific); may underperform on casual text.

## Citation

Please cite the thesis (Nikolas Rauscher, 2025) and the WECHSEL paper (Minixhofer et al., 2022).