model update
README.md (CHANGED)

```diff
@@ -102,7 +102,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.5956284153005464
   - task:
       name: Analogy Questions (NELL-ONE Analogy)
       type: multiple-choice-qa
@@ -198,7 +198,7 @@ This model achieves the following results on the relation understanding tasks:
 - Accuracy on U4: 0.5925925925925926
 - Accuracy on Google: 0.938
 - Accuracy on ConceptNet Analogy: 0.3775167785234899
-- Accuracy on T-Rex Analogy: 0.
+- Accuracy on T-Rex Analogy: 0.5956284153005464
 - Accuracy on NELL-ONE Analogy: 0.6583333333333333
 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-iloob-e-semeval2012/raw/main/classification.json)):
 - Micro F1 score on BLESS: 0.9219526894681332
```
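For readers checking these figures, the long decimals in the model card are ordinary correct/total accuracies printed as Python floats. A minimal sketch of that arithmetic (the counts 109 and 183 are assumptions, chosen only because they reproduce the updated T-Rex figure; the commit itself does not state the test-set size):

```python
def accuracy(correct: int, total: int) -> float:
    """Fraction of analogy questions answered correctly."""
    return correct / total

# Hypothetical counts: 109 correct out of an assumed 183 T-Rex
# analogy questions reproduces the value written into the README.
print(accuracy(109, 183))  # ~0.5956
```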