Under identical parameter budgets and training settings:
- **NER (ARMAN + PEYMA):** TooKaBERT achieves the highest F1 (95.5); our model is competitive at 94.08, close to FABERT but slightly lower on F1.
- **Relation Extraction (PERLEX):** Our model (F1=90) surpasses FABERT (88) and is slightly below TooKaBERT (91).
These results suggest the tokenizer/backbone choices here are strong for RE and competitive for NER, especially considering the compact backbone.
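For reference, the F1 scores above are the standard harmonic mean of precision and recall. A minimal sketch of the computation (the precision/recall values here are hypothetical, not the statistics from the actual evaluation runs):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example values, for illustration only:
score = f1(0.93, 0.95)
print(round(100 * score, 2))  # prints 93.99
```

In practice, entity-level NER scores like those above are typically computed with a sequence-labeling evaluator (e.g. seqeval) rather than token-level counts.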