Update README.md
README.md
@@ -84,6 +84,9 @@ Descriptions of the benchmark datasets used for evaluation are as follows:
 - **XPQARetrieval**
   A real-world dataset constructed from user queries and relevant product documents on a Korean e-commerce platform.
 
+> **Tip:**
+> While many benchmark datasets are available for evaluation, in this project we chose to use only those that contain clean positive documents for each query. Keep in mind that a benchmark dataset is just that: a benchmark. For real-world applications, it is best to construct an evaluation dataset tailored to your specific domain and evaluate embedding models, such as PIXIE, in that environment to determine the most suitable one.
+
 #### 7 Datasets of BEIR (English)
 Our model, **telepix/PIXIE-Rune-Preview**, achieves strong performance on a wide range of tasks, including fact verification, multi-hop question answering, financial QA, and scientific document retrieval, demonstrating competitive generalization across diverse domains.
 
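The added tip recommends building your own evaluation set rather than relying on published benchmarks alone. A minimal sketch of what that can look like, assuming the model loads through the `sentence-transformers` library (typical for embedding models); the queries, documents, and relevance labels below are hypothetical placeholders for your own domain data:

```python
# Sketch: evaluate an embedding model on a small domain-specific retrieval set
# using sentence-transformers' InformationRetrievalEvaluator.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("telepix/PIXIE-Rune-Preview")

# Hypothetical domain data: id -> text. Replace with queries and documents
# from your own application.
queries = {
    "q1": "How do I reset the device to factory settings?",
}
corpus = {
    "d1": "Hold the power button for ten seconds to restore factory settings.",
    "d2": "The warranty covers manufacturing defects for two years.",
}
# Clean positive documents for each query, as the tip recommends.
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="my-domain")
results = evaluator(model)  # retrieval metrics such as nDCG@10, MRR@10, Recall@k
print(results)
```

Comparing these metrics across candidate embedding models on your own data is usually more informative than their relative ordering on public benchmarks.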