hotchpotch committed (verified)
Commit 2fc483e · Parent(s): f120750

Unify README structure and add Data structure bm25 details

Files changed (1): README.md (+26 −49)
README.md CHANGED
@@ -167,16 +167,25 @@ configs:
  path: queries/NanoCodeSearchNetRuby-*
  ---
 
-
 # NanoCodeSearchNet (with bm25 subset)
 
-A tiny, evaluation-ready slice of [CodeSearchNet](https://huggingface.co/datasets/code-search-net/code_search_net) (test set) that mirrors the spirit of [NanoBEIR](https://huggingface.co/collections/zeta-alpha-ai/nanobeir): same task, same style, but dramatically smaller so you can iterate and benchmark in minutes instead of hours.
-
-Evaluation can be performed during and after training by integrating with Sentence Transformer's Evaluation module (InformationRetrievalEvaluator).
-
-## Performance Comparison Across Models
-
-NanoCodeSearchNet Evaluation (NDCG@10)
 
 A `bm25` subset is included for first-stage retrieval and reranking experiments.
@@ -186,57 +195,25 @@ A `bm25` subset is included for first-stage retrieval and reranking experiments.
 | e5-small | 0.7268 | 0.6805 | 0.7884 | 0.6386 | 0.6603 | 0.9142 | 0.6788 |
 | e5-large | 0.7836 | 0.7595 | 0.8287 | 0.6942 | 0.7291 | 0.9513 | 0.7391 |
 | bge-m3 | 0.7107 | 0.6643 | 0.6934 | 0.6138 | 0.6229 | 0.9852 | 0.6844 |
-
-## What this dataset is
-
-- A collection of 6 programming-language subsets (`corpus`, `queries`, `qrels`) published on the Hugging Face Hub under `hotchpotch/NanoCodeSearchNet`.
-- Each subset contains **50 test queries** and a **corpus of up to 10,000 code snippets**.
-- Queries are function docstrings, and positives are the corresponding function bodies from the same source row.
-- Query IDs are `q-<docid>`, where `docid` is the `func_code_url` when available.
-- Built from the CodeSearchNet `test` split (`refs/convert/parquet`) with deterministic sampling (seed=42).
-- License: **Other** (see CodeSearchNet and upstream repository licenses).
 
 ## Subset names
 
-- Split names:
-  - `NanoCodeSearchNetGo`
-  - `NanoCodeSearchNetJava`
-  - `NanoCodeSearchNetJavaScript`
-  - `NanoCodeSearchNetPHP`
-  - `NanoCodeSearchNetPython`
-  - `NanoCodeSearchNetRuby`
-- Config names: `corpus`, `queries`, `qrels`
-
-## Usage
-
-```python
-from datasets import load_dataset
-
-split = "NanoCodeSearchNetPython"
-queries = load_dataset("hotchpotch/NanoCodeSearchNet", "queries", split=split)
-corpus = load_dataset("hotchpotch/NanoCodeSearchNet", "corpus", split=split)
-qrels = load_dataset("hotchpotch/NanoCodeSearchNet", "qrels", split=split)
-
-print(queries[0]["text"])
-```
-
-## Why Nano?
-
-- **Fast eval loops**: 50 queries × 10k docs fits comfortably on a single GPU/CPU run.
-- **Reproducible**: deterministic sampling and stable IDs.
-- **Drop-in**: BEIR/NanoBEIR-style schemas, so existing IR loaders need minimal tweaks.
-
-### Upstream sources
-
-- Original data: **CodeSearchNet**, [CodeSearchNet Challenge: Evaluating the State of Semantic Code Search (1909.09436)](https://huggingface.co/papers/1909.09436).
-- Base dataset: [code-search-net/code_search_net](https://huggingface.co/datasets/code-search-net/code_search_net) (Hugging Face Hub).
-- Inspiration: **NanoBEIR** (lightweight evaluation subsets).
 
 ## License
 
-Other. This dataset is derived from CodeSearchNet and ultimately from open-source GitHub repositories. Please respect original repository licenses and attribution requirements.
-
-## Author
-
-- Yuichi Tateno
 
  path: queries/NanoCodeSearchNetRuby-*
  ---
 
 # NanoCodeSearchNet (with bm25 subset)
 
+A lightweight, evaluation-ready subset of [CodeSearchNet](https://huggingface.co/datasets/code-search-net/code_search_net), designed for fast code-retrieval benchmarking.
+
+## What this dataset is
+
+- A code-retrieval benchmark with 6 programming-language splits: `Go`, `Java`, `JavaScript`, `PHP`, `Python`, `Ruby`.
+- Each split contains 50 queries and up to 10,000 code snippets.
+- Queries are function docstrings, and positives are the corresponding function bodies.
+
+## Data structure
+
+- `corpus`: `_id`, `text`
+- `queries`: `_id`, `text`
+- `qrels`: `query-id`, `corpus-id`, `score`
+- `bm25`: `query-id`, `corpus-ids` (BM25 candidate lists for first-stage retrieval and reranking experiments)
+
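Given the columns above, the relevance judgments in `qrels` can be joined against the `bm25` candidate lists, for example to check how much first-stage recall a reranker has to work with. A minimal sketch with toy rows; `first_stage_recall` and the sample IDs are illustrative, not part of the dataset:

```python
# Toy rows shaped like the qrels config: query-id, corpus-id, score
qrels_rows = [
    {"query-id": "q-1", "corpus-id": "d-3", "score": 1},
    {"query-id": "q-2", "corpus-id": "d-7", "score": 1},
]

# Toy rows shaped like the bm25 config: query-id plus ranked corpus-ids
bm25_rows = [
    {"query-id": "q-1", "corpus-ids": ["d-3", "d-5", "d-9"]},
    {"query-id": "q-2", "corpus-ids": ["d-1", "d-2", "d-4"]},
]

def first_stage_recall(qrels_rows, bm25_rows, k=10):
    """Fraction of relevant documents present in the top-k BM25 candidates."""
    relevant = {}
    for r in qrels_rows:
        if r["score"] > 0:
            relevant.setdefault(r["query-id"], set()).add(r["corpus-id"])
    candidates = {r["query-id"]: r["corpus-ids"][:k] for r in bm25_rows}
    per_query = sum(
        len(docs & set(candidates.get(qid, []))) / len(docs)
        for qid, docs in relevant.items()
    )
    return per_query / len(relevant)

print(first_stage_recall(qrels_rows, bm25_rows))  # q-1 is covered, q-2 is not -> 0.5
```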
+## Performance Comparison Across Models
 
 A `bm25` subset is included for first-stage retrieval and reranking experiments.
 
 | e5-small | 0.7268 | 0.6805 | 0.7884 | 0.6386 | 0.6603 | 0.9142 | 0.6788 |
 | e5-large | 0.7836 | 0.7595 | 0.8287 | 0.6942 | 0.7291 | 0.9513 | 0.7391 |
 | bge-m3 | 0.7107 | 0.6643 | 0.6934 | 0.6138 | 0.6229 | 0.9852 | 0.6844 |
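The scores in this table are NDCG@10. For reference, a minimal binary-gain NDCG@k can be written as below; this is an illustrative sketch, not the evaluation harness behind these numbers:

```python
import math

def ndcg_at_k(ranked_ids, relevant_ids, k=10):
    """NDCG@k with binary gains: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(
        1.0 / math.log2(i + 2)  # rank i (0-based) discounts by log2(rank + 1)
        for i, doc_id in enumerate(ranked_ids[:k])
        if doc_id in relevant_ids
    )
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant_ids))))
    return dcg / ideal if ideal > 0 else 0.0

# A relevant doc at rank 1 scores 1.0; pushing it to rank 2 lowers the score.
print(ndcg_at_k(["d-3", "d-1"], {"d-3"}))  # 1.0
print(ndcg_at_k(["d-1", "d-3"], {"d-3"}))  # 1/log2(3) ≈ 0.6309
```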
 
 ## Subset names
 
+- Split names: `NanoCodeSearchNetGo`, `NanoCodeSearchNetJava`, `NanoCodeSearchNetJavaScript`, `NanoCodeSearchNetPHP`, `NanoCodeSearchNetPython`, `NanoCodeSearchNetRuby`
+- Config names: `corpus`, `queries`, `qrels`, `bm25`
 
+## BM25 tokenization strategy
+
+- `transformer`: `Qwen/Qwen3-0.6B` tokenizer
+- `word_seg`: language-specific word segmentation for `ja`, `zh`, `th`, `ko`
+- `stemmer`: `PyStemmer`
+- `whitespace`: simple whitespace split (`str.split()`)
+- Auto selection chooses the best strategy per split/language using retrieval metrics.
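The auto-selection idea in the last bullet can be sketched as: score each candidate tokenizer with a retrieval metric on held-out queries and keep the argmax. Only the `whitespace` strategy below is taken from this README; the `lowercase` variant, the toy corpus, and the token-overlap ranker are illustrative stand-ins (the real `transformer`, `word_seg`, and `stemmer` strategies need external libraries such as `transformers` and `PyStemmer`):

```python
def mrr(strategy, queries, corpus, qrels):
    """Mean reciprocal rank of the gold document under token-overlap ranking."""
    total = 0.0
    for qid, text in queries.items():
        q_tokens = set(strategy(text))
        ranked = sorted(
            corpus,
            key=lambda did: len(q_tokens & set(strategy(corpus[did]))),
            reverse=True,
        )
        total += 1.0 / (ranked.index(qrels[qid]) + 1)
    return total / len(queries)

strategies = {
    "whitespace": str.split,                         # simple str.split()
    "lowercase": lambda text: text.lower().split(),  # hypothetical variant
}

queries = {"q-1": "Parse JSON Config"}
corpus = {"d-1": "parse json config reader", "d-2": "JSON Config template"}
qrels = {"q-1": "d-1"}  # gold document per query

# Auto selection: keep the strategy with the best retrieval metric.
best = max(strategies, key=lambda name: mrr(strategies[name], queries, corpus, qrels))
print(best)  # lowercase (case folding lets the query match d-1's tokens)
```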
 
+## Upstream source
+
+- Original dataset: [code-search-net/code_search_net](https://huggingface.co/datasets/code-search-net/code_search_net)
+- Paper: [CodeSearchNet Challenge (1909.09436)](https://huggingface.co/papers/1909.09436)
 
 ## License
 
+Other. This dataset is derived from CodeSearchNet and follows upstream repository licensing and attribution requirements.