Part of the scientific-multilingual-transfer collection, which contains the models from the paper [TBA Link] (13 items).
Polish monolingual base model, continued-pretrained for 15k steps on the SciLaD target-language split as a control baseline.

This is the monolingual continued-pretraining control checkpoint reported in the paper's results table. It is provided to make the baseline comparison reproducible.
PL-Base-CP (baseline-control)

Zero-shot Global-MMLU accuracy, as reported by the paper's aggregation:
| Metric | Accuracy (%) |
|---|---|
| Average | 24.65 |
| STEM | 23.88 |
| Humanities | 24.51 |
| Social Sciences | 23.43 |
| Other | 26.87 |
The model is evaluated primarily with zero-shot Global-MMLU. Downstream task-specific evaluation is recommended before deployment in specialized scientific workflows.
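Note that the reported Average (24.65) is not the plain mean of the four category scores (which would be 24.67); the overall figure is presumably computed over all questions, so categories with more questions weigh more. A minimal sketch of this kind of aggregation, using a hypothetical flat record format (not the paper's actual evaluation harness output):

```python
from collections import defaultdict

def category_accuracies(records):
    """Compute per-category and overall zero-shot accuracy.

    records: iterable of (category, predicted, gold) tuples.
    The overall score is question-weighted (micro average), so it
    generally differs from the mean of the per-category scores.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for cat, pred, gold in records:
        total[cat] += 1
        correct[cat] += int(pred == gold)
    per_cat = {c: 100.0 * correct[c] / total[c] for c in total}
    overall = 100.0 * sum(correct.values()) / sum(total.values())
    return per_cat, overall

# Toy usage: STEM has two questions, Other has one, so STEM
# contributes twice as much weight to the overall score.
per_cat, overall = category_accuracies([
    ("STEM", "A", "A"),
    ("STEM", "B", "C"),
    ("Other", "D", "D"),
])
```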
Base model: allegro/plt5-base