Zongxia Li committed
Update README.md

README.md CHANGED
|
#### Transformer Match

Our fine-tuned BERT model is in this repository. Our package also supports downloading and matching directly; distilroberta, distilbert, and roberta are also supported now! 🔥🔥🔥

```python
from qa_metrics.transformerMatcher import TransformerMatcher

question = "who will take the throne after the queen dies"

# Supported model names include "distilroberta", "distilbert", and "roberta"
tm = TransformerMatcher("distilroberta")
scores = tm.get_scores(reference_answer, candidate_answer, question)
match_result = tm.transformer_match(reference_answer, candidate_answer, question)
print("Score: %s; Transformer Match: %s" % (scores, match_result))
```
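Under the hood, a transformer matcher of this kind is typically a binary classifier over (question, reference, candidate) triples. As a rough illustration of the final decision step only — with hypothetical logits and an illustrative 0.5 threshold, not the package's actual implementation — the score-to-match conversion looks like this:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def match_from_logits(logits, threshold=0.5):
    """Turn [not-equivalent, equivalent] logits into (score, bool match)."""
    prob_equivalent = softmax(logits)[1]
    return prob_equivalent, prob_equivalent >= threshold

# Hypothetical classifier output for one (question, reference, candidate) triple
score, is_match = match_from_logits([-1.2, 2.3])
print("Score: %.3f; Match: %s" % (score, is_match))  # → Score: 0.971; Match: True
```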

#### F1 Score
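For reference, token-level F1 (as used in SQuAD-style QA evaluation) measures overlap between the tokens of the reference and candidate answers. A minimal sketch of that standard computation — the package's exact tokenization and normalization may differ:

```python
from collections import Counter

def token_f1(reference: str, candidate: str) -> float:
    """Standard token-overlap F1 between two answer strings."""
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    # Multiset intersection counts each shared token at most min(count) times
    common = Counter(ref_tokens) & Counter(cand_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Prince William", "William"))  # ≈ 0.667
```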

```python
cfm = CFMatcher()
scores = cfm.get_scores(reference_answer, candidate_answer, question)
match_result = cfm.cf_match(reference_answer, candidate_answer, question)
print("Score: %s; CF Match: %s" % (scores, match_result))
```

If you find this repo helpful, please cite our paper:

```bibtex
@misc{li2024cfmatch,
      title={CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering},
}
```

## Updates
- [01/24/24] 🔥 The full paper is uploaded and can be accessed [here](https://arxiv.org/abs/2401.13170). The dataset is expanded and the leaderboard is updated.
- Our training dataset is adapted and augmented from [Bulian et al.](https://github.com/google-research-datasets/answer-equivalence-dataset). Our [dataset repo](https://github.com/zli12321/Answer_Equivalence_Dataset.git) includes the augmented training set and the QA evaluation test sets discussed in our paper.
- Our model now supports [distilroberta](https://huggingface.co/Zongxia/answer_equivalence_distilroberta) and [distilbert](https://huggingface.co/Zongxia/answer_equivalence_distilbert), smaller and more robust matching models than BERT!

## License