Zongxia Li
committed on
Update README.md
README.md
CHANGED
@@ -14,7 +14,7 @@ pipeline_tag: text-classification
 [](https://pypi.org/project/qa-metrics/)
 
 
-QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models. It provides various basic metrics to assess the performance of QA models. Check out our paper [**
+QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models. It provides various basic metrics for assessing the performance of QA models. Check out our paper [**PANDA**](https://arxiv.org/abs/2402.11161), a matching method that goes beyond token-level matching and is more efficient than LLM-based matching while retaining evaluation performance competitive with transformer LLM matchers.
 
 
 ## Installation
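The hunk ends at the `## Installation` heading, whose body is not shown in this diff. Based on the PyPI project link in the badge above (an assumption, since the commit does not show the installation section itself), installation is presumably the standard PyPI command:

```shell
# Install qa-metrics from PyPI (assumed from the project link above)
pip install qa-metrics
```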