Add model results
README.md (CHANGED)

```diff
@@ -140,12 +140,12 @@ All evaluations were conducted using the open-source **[Korean-MTEB-Retrieval-Ev
 | Qwen/Qwen3-Embedding-8B | 8B | 0.6154 | 0.7839 | 0.6701 |
 | Snowflake/snowflake-arctic-embed-l-v2.0 | 0.5B | 0.5448 | 0.7390 | 0.6006 |
 | BAAI/bge-m3 | 0.5B | 0.5056 | 0.7483 | 0.5573 |
+| Qwen/Qwen3-Embedding-0.6B | 0.6B | 0.4707 | 0.7017 | 0.5839 |
 | Octen/Octen-Embedding-0.6B | 0.6B | 0.4683 | 0.7057 | 0.5769 |
 | Salesforce/SFR-Embedding-Mistral | 7B | 0.4579 | N/A | N/A |
 | Alibaba-NLP/gte-multilingual-base | 0.3B | 0.4097 | 0.7084 | 0.5746 |
 | intfloat/multilingual-e5-large-instruct | 0.6B | 0.2384 | 0.7050 | N/A |
 | jinaai/jina-embeddings-v3 | 0.5B | N/A | 0.7088 | 0.4861 |
-| Qwen/Qwen3-Embedding-0.6B | 0.6B | N/A | 0.7017 | 0.5839 |
 | openai/text-embedding-3-large | N/A | N/A | 0.6646 | N/A |
 
 To better interpret the evaluation results above, we briefly describe the characteristics and evaluation intent of each benchmark suite used in this comparison.
```