Raphael Scheible committed: Update README.md
README.md

@@ -42,10 +42,41 @@ The dataset amounts to **approximately 1.3T tokens**, shuffled for improved vari
## Performance

GeistBERT achieves **SOTA results** on multiple tasks:

- **NER**: CoNLL 2003, GermEval 2014
- **Text Classification**: GermEval 2018 (coarse & fine), 10kGNAD
- **NLI**: German subset of XNLI

Metrics:

- **NER and Text Classification**: F1 score
- **NLI**: Accuracy

Details:

- **Bold** values indicate the best-performing model within one architecture (base, large); <ins>underlined</ins> values the second best.

**Base models**

| Model | Accuracy NLI | GermEval\_14 F1 | CoNLL F1 | Coarse F1 | Fine F1 | 10kGNAD F1 |
|-------------------------------------|--------------|----------------|----------|-----------|---------|------------|
| [GeistBERT](https://huggingface.co/GeistBERT) | **82.67** | **88.47** | _86.17_ | _79.67_ | 66.42 | **90.89** |
| GeistBERT<sub>Nyströmformer</sub> | 82.50 | 88.23 | 85.76 | 79.17 | **78.57** | 90.33 |
| GeistBERT<sub>Longformer</sub> | _82.51_ | _88.45_ | **86.71** | **80.56** | _66.76_ | 90.32 |
| [GottBERT_base_best](https://huggingface.co/TUM/GottBERT_base_best) | 80.82 | 87.55 | 85.93 | 78.17 | 53.30 | 89.64 |
| [GottBERT_base_last](https://huggingface.co/TUM/GottBERT_base_last) | 81.04 | 87.48 | 85.61 | 78.18 | 53.92 | 90.27 |
| [GottBERT_filtered_base_best](https://huggingface.co/TUM/GottBERT_filtered_base_best) | 80.56 | 87.57 | 86.14 | 78.65 | 52.82 | 89.79 |
| [GottBERT_filtered_base_last](https://huggingface.co/TUM/GottBERT_filtered_base_last) | 80.74 | 87.59 | 85.66 | 78.08 | 52.39 | 89.92 |
| GELECTRA_base | 81.70 | 86.91 | 85.37 | 77.26 | 50.07 | 89.02 |
| GBERT_base | 80.06 | 87.24 | 85.16 | 77.37 | 51.51 | 90.30 |
| dbmdzBERT | 68.12 | 86.82 | 85.15 | 77.46 | 52.07 | _90.34_ |
| GermanBERT | 78.16 | 86.53 | 83.87 | 74.81 | 47.78 | 90.18 |
| XLM-R_base | 79.76 | 86.14 | 84.46 | 77.13 | 50.54 | 89.81 |
| mBERT | 77.03 | 86.67 | 83.18 | 73.54 | 48.32 | 88.90 |

**Large models**

| Model | Accuracy NLI | GermEval\_14 F1 | CoNLL F1 | Coarse F1 | Fine F1 | 10kGNAD F1 |
|-------------------------------------|--------------|----------------|----------|-----------|---------|------------|
| [GottBERT_large](https://huggingface.co/TUM/GottBERT_large) | 82.46 | 88.20 | _86.78_ | 79.40 | 54.61 | 90.24 |
| [GottBERT_filtered_large_best](https://huggingface.co/TUM/GottBERT_filtered_large_best) | 83.31 | 88.13 | 86.30 | 79.32 | 54.70 | 90.31 |
| [GottBERT_filtered_large_last](https://huggingface.co/TUM/GottBERT_filtered_large_last) | 82.79 | _88.27_ | 86.28 | 78.96 | 54.72 | 90.17 |
| GELECTRA_large | **86.33** | _88.72_ | _86.78_ | **81.28** | _56.17_ | **90.97** |
| GBERT_large | _84.21_ | _88.72_ | **87.19** | _80.84_ | **57.37** | _90.74_ |
| XLM-R_large | 84.07 | **88.83** | 86.54 | 79.05 | 55.06 | 90.17 |
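For reference, the two metrics can be sketched in plain Python. This is a simplified illustration, not the benchmark evaluation code; the published scores may use the official task scorers or a different averaging scheme:

```python
from collections import Counter

def accuracy(gold, pred):
    """Fraction of examples whose predicted label matches the gold label (NLI metric)."""
    assert len(gold) == len(pred)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def micro_f1(gold_items, pred_items):
    """Micro-averaged F1 over predicted items, as used for NER and text
    classification. Items are hashable, e.g. (start, end, type) spans for NER
    or (doc_id, label) pairs for classification."""
    gold, pred = Counter(gold_items), Counter(pred_items)
    tp = sum((gold & pred).values())          # true positives: items in both sets
    precision = tp / sum(pred.values()) if pred else 0.0
    recall = tp / sum(gold.values()) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For entity-level NER scoring, an item is a full span with its type, so a partially overlapping or mistyped prediction counts as both a false positive and a false negative.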
## Intended Use

This model is designed for **German NLP tasks**, including: