  - split: NanoCodeSearchNetRuby
    path: queries/NanoCodeSearchNetRuby-*
---
# NanoCodeSearchNet

A tiny, evaluation-ready slice of CodeSearchNet that mirrors the spirit of [NanoBEIR](https://huggingface.co/collections/zeta-alpha-ai/nanobeir): same task, same style, but dramatically smaller, so you can iterate and benchmark in minutes instead of hours.

Evaluation can be performed during and after training by integrating with Sentence Transformers' evaluation module (`InformationRetrievalEvaluator`).

## NanoCodeSearchNet Evaluation (NDCG@10)
| Model | Avg | Go | Java | JavaScript | PHP | Python | Ruby |
|---|---:|---:|---:|---:|---:|---:|---:|
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | **0.7351** | 0.6706 | 0.7899 | 0.6582 | 0.6651 | 0.9258 | 0.7008 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | **0.7769** | 0.7459 | 0.8304 | 0.7016 | 0.7069 | 0.9513 | 0.7251 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) | **0.7371** | 0.7137 | 0.7758 | 0.6126 | 0.6561 | 0.9582 | 0.7060 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | **0.7541** | 0.7097 | 0.8124 | 0.6715 | 0.7065 | 0.9386 | 0.6860 |
| [bge-m3](https://huggingface.co/BAAI/bge-m3) | **0.7094** | 0.6680 | 0.7050 | 0.6154 | 0.6238 | 0.9779 | 0.6662 |
| [gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) | **0.8112** | 0.7789 | 0.8666 | 0.7344 | 0.7991 | 0.9652 | 0.7231 |
| [nomic-embed-text-v2-moe](https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe) | **0.7824** | 0.7635 | 0.8343 | 0.6519 | 0.7470 | 0.9852 | 0.7122 |
| [paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | **0.4651** | 0.3978 | 0.4608 | 0.3269 | 0.2183 | 0.9236 | 0.4631 |
Notes:

- The above results were computed with `nano_code_search_net_eval.py`.
## What this dataset is

- A collection of 6 programming-language subsets (`corpus`, `queries`, `qrels`) published on the Hugging Face Hub under `hotchpotch/NanoCodeSearchNet`.
- Each subset contains **50 test queries** and a **corpus of up to 10,000 code snippets**.
- Queries are function docstrings, and positives are the corresponding function bodies from the same source row.
- Query IDs are `q-<docid>`, where `docid` is the `func_code_url` when available.
- Built from the CodeSearchNet `test` split (`refs/convert/parquet`) with deterministic sampling (seed=42).
- License: **Other** (see CodeSearchNet and upstream repository licenses).
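To make the relationship between the three configs concrete, here is a minimal sketch with toy rows. The field names (`_id`, `text`, `query-id`, `corpus-id`) follow the usual BEIR/NanoBEIR convention and are assumptions here; check the dataset viewer for the exact column names.

```python
# Toy rows mimicking the assumed BEIR-style schema of each config.
queries = [{"_id": "q-doc1", "text": "Parse a CSV file into a list of rows."}]
corpus = [{"_id": "doc1", "text": "def parse_csv(path):\n    ..."}]
qrels = [{"query-id": "q-doc1", "corpus-id": "doc1"}]

# Map each query ID to the set of relevant corpus document IDs.
relevant = {}
for row in qrels:
    relevant.setdefault(row["query-id"], set()).add(row["corpus-id"])

print(relevant)  # {'q-doc1': {'doc1'}}
```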
## Subset names

- Split names:
  - `NanoCodeSearchNetGo`
  - `NanoCodeSearchNetJava`
  - `NanoCodeSearchNetJavaScript`
  - `NanoCodeSearchNetPHP`
  - `NanoCodeSearchNetPython`
  - `NanoCodeSearchNetRuby`
- Config names: `corpus`, `queries`, `qrels`
## Usage

```python
from datasets import load_dataset

split = "NanoCodeSearchNetPython"
queries = load_dataset("hotchpotch/NanoCodeSearchNet", "queries", split=split)
corpus = load_dataset("hotchpotch/NanoCodeSearchNet", "corpus", split=split)
qrels = load_dataset("hotchpotch/NanoCodeSearchNet", "qrels", split=split)

print(queries[0]["text"])
```
### Example eval code

```bash
python ./nano_code_search_net_eval.py \
  --model-path intfloat/multilingual-e5-small \
  --query-prompt "query: " \
  --corpus-prompt "passage: "
```

For models that require `trust_remote_code`, add `--trust-remote-code` (e.g., `BAAI/bge-m3`).
## Why Nano?

- **Fast eval loops**: 50 queries × 10k docs fits comfortably in a single GPU/CPU run.
- **Reproducible**: deterministic sampling and stable IDs.
- **Drop-in**: BEIR/NanoBEIR-style schemas, so existing IR loaders need minimal tweaks.
### Upstream sources

- Original data: **CodeSearchNet**, from *CodeSearchNet Challenge: Evaluating the State of Semantic Code Search* (arXiv:1909.09436).
- Base dataset: [code-search-net/code_search_net](https://huggingface.co/datasets/code-search-net/code_search_net) (Hugging Face Hub).
- Inspiration: **NanoBEIR** (lightweight evaluation subsets).
## License

Other. This dataset is derived from CodeSearchNet and ultimately from open-source GitHub repositories. Please respect original repository licenses and attribution requirements.
## Author

- Yuichi Tateno