hotchpotch committed
Commit ea1f642 · verified · 1 Parent(s): f63c263

Update README.md

Files changed (1): README.md (+7 -25)

README.md CHANGED
@@ -133,22 +133,15 @@ A tiny, evaluation-ready slice of [CodeSearchNet](https://huggingface.co/dataset
 
 Evaluation can be performed during and after training by integrating with Sentence Transformer's Evaluation module (InformationRetrievalEvaluator).
 
-## NanoCodeSearchNet Evaluation (NDCG@10)
+## Performance Comparison Across Models
 
-| Model | Avg | Go | Java | JavaScript | PHP | Python | Ruby |
+NanoCodeSearchNet Evaluation (NDCG@10)
+
+| Model | avg | Go | Java | JavaScript | PHP | Python | Ruby |
 |---|---:|---:|---:|---:|---:|---:|---:|
-| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | **0.7351** | 0.6706 | 0.7899 | 0.6582 | 0.6651 | 0.9258 | 0.7008 |
-| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | **0.7769** | 0.7459 | 0.8304 | 0.7016 | 0.7069 | 0.9513 | 0.7251 |
-| [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) | **0.7371** | 0.7137 | 0.7758 | 0.6126 | 0.6561 | 0.9582 | 0.7060 |
-| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | **0.7541** | 0.7097 | 0.8124 | 0.6715 | 0.7065 | 0.9386 | 0.6860 |
-| [bge-m3](https://huggingface.co/BAAI/bge-m3) | **0.7094** | 0.6680 | 0.7050 | 0.6154 | 0.6238 | 0.9779 | 0.6662 |
-| [gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) | **0.8112** | 0.7789 | 0.8666 | 0.7344 | 0.7991 | 0.9652 | 0.7231 |
-| [nomic-embed-text-v2-moe](https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe) | **0.7824** | 0.7635 | 0.8343 | 0.6519 | 0.7470 | 0.9852 | 0.7122 |
-| [paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | **0.4651** | 0.3978 | 0.4608 | 0.3269 | 0.2183 | 0.9236 | 0.4631 |
+| e5-small | 0.7268 | 0.6805 | 0.7884 | 0.6386 | 0.6603 | 0.9142 | 0.6788 |
+| e5-large | 0.7836 | 0.7595 | 0.8287 | 0.6942 | 0.7291 | 0.9513 | 0.7391 |
+| bge-m3 | 0.7107 | 0.6643 | 0.6934 | 0.6138 | 0.6229 | 0.9852 | 0.6844 |
-
-Notes:
-- The above results were computed with `nano_code_search_net_eval.py`.
-- https://huggingface.co/datasets/hotchpotch/NanoCodeSearchNet/blob/main/nano_code_search_net_eval.py
 
 
 ## What this dataset is
@@ -184,17 +177,6 @@ qrels = load_dataset("hotchpotch/NanoCodeSearchNet", "qrels", split=split)
 print(queries[0]["text"])
 ```
 
-### Example eval code
-
-```bash
-python ./nano_code_search_net_eval.py \
-  --model-path intfloat/multilingual-e5-small \
-  --query-prompt "query: " \
-  --corpus-prompt "passage: "
-```
-
-For models that require `trust_remote_code`, add `--trust-remote-code` (e.g., `BAAI/bge-m3`).
-
 ## Why Nano?
 
 - **Fast eval loops**: 50 queries × 10k docs fits comfortably on a single GPU/CPU run.
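Every score in the tables above is NDCG@10. For readers unfamiliar with the metric, here is a minimal self-contained sketch using binary relevance (a document either matches a query or it does not); it illustrates the formula only and is not the dataset's evaluation script:

```python
import math

def ndcg_at_10(ranked_ids, relevant_ids):
    """NDCG@10 with binary relevance: DCG of the top-10 ranked documents,
    normalized by the ideal DCG (all relevant documents ranked first)."""
    dcg = sum(
        1.0 / math.log2(rank + 2)  # gain 1 at 0-based rank, log-discounted
        for rank, doc_id in enumerate(ranked_ids[:10])
        if doc_id in relevant_ids
    )
    ideal = sum(
        1.0 / math.log2(rank + 2)
        for rank in range(min(len(relevant_ids), 10))
    )
    return dcg / ideal if ideal > 0 else 0.0

# A perfect ranking scores 1.0; pushing the relevant doc to rank 2 lowers it.
print(ndcg_at_10(["d1", "d2", "d3"], {"d1"}))  # 1.0
print(ndcg_at_10(["d2", "d1", "d3"], {"d1"}))  # ≈ 0.631
```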
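The README notes that evaluation can plug into Sentence Transformers' `InformationRetrievalEvaluator`. A minimal sketch of that integration follows. The `"corpus"` config name, the `_id` / `query-id` / `corpus-id` column names, and the per-language split name are assumptions modeled on BEIR-style Nano* datasets; check the dataset card before relying on them.

```python
def to_ir_inputs(query_rows, corpus_rows, qrel_rows):
    """Convert row dicts into the three mappings InformationRetrievalEvaluator
    expects: query-id -> text, doc-id -> text, query-id -> {relevant doc-ids}.
    Column names here are assumptions; adjust to the actual dataset schema."""
    queries = {str(r["_id"]): r["text"] for r in query_rows}
    corpus = {str(r["_id"]): r["text"] for r in corpus_rows}
    relevant = {}
    for r in qrel_rows:
        relevant.setdefault(str(r["query-id"]), set()).add(str(r["corpus-id"]))
    return queries, corpus, relevant


def evaluate(model_name, split="python"):
    # Heavy imports kept local so to_ir_inputs stays dependency-free.
    from datasets import load_dataset
    from sentence_transformers import SentenceTransformer
    from sentence_transformers.evaluation import InformationRetrievalEvaluator

    ds = "hotchpotch/NanoCodeSearchNet"
    queries, corpus, relevant = to_ir_inputs(
        load_dataset(ds, "queries", split=split),
        load_dataset(ds, "corpus", split=split),
        load_dataset(ds, "qrels", split=split),
    )
    evaluator = InformationRetrievalEvaluator(
        queries, corpus, relevant, name=f"nano-csn-{split}", ndcg_at_k=[10]
    )
    return evaluator(SentenceTransformer(model_name))
```

The same evaluator instance can also be passed to a `SentenceTransformerTrainer` to score checkpoints during training, which is what "evaluation during training" refers to.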