Update README.md
README.md CHANGED

@@ -184,14 +184,14 @@ So it makes sense to evaluate our models in retrieval slice of the MTEB benchmark
 ##### Long Document Retrieval
 
 <center>
-<img src="./ar_metrics_4.png" width=
+<img src="./ar_metrics_4.png" width=150%/>
 <b><p>Table 3: Detailed Arabic retrieval performance on the MultiLongDoc dev set (measured by nDCG@10)</p></b>
 </center>
 
 
 ##### X-lingual Retrieval
 
-Almost all models below are monolingual arabic models
+Almost all of the models below are monolingual Arabic models, so they have no notion of any other language. The table below shows how our model excels in cross-lingual scenarios.
 
 <center>
 <img src="./ar_metrics_5.png" width=80%/>
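The tables above report nDCG@10. As a reference for readers unfamiliar with the metric, here is a minimal sketch of how it is computed; the graded relevance list is a made-up example, not data from this repository:

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: each relevance grade is discounted
    # by log2 of its (1-indexed) rank position plus one.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical graded relevances of the top-ranked documents for one query.
ranked = [3, 2, 3, 0, 1, 2]
print(ndcg_at_k(ranked, 10))
```

A ranking that already lists documents in descending relevance order scores exactly 1.0, which is why nDCG@10 is bounded in [0, 1] and comparable across queries.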