---
dataset_info:
- config_name: go-v1-hard-negatives-100k
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  - name: negs
    list: string
  splits:
  - name: train
    num_examples: 87647
- config_name: go-v1-pair-2M
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  splits:
  - name: train
    num_examples: 1541111
- config_name: java-v1-hard-negatives-100k
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  - name: negs
    list: string
  splits:
  - name: train
    num_examples: 81657
- config_name: java-v1-pair-2M
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  splits:
  - name: train
    num_examples: 1491655
- config_name: javascript-v1-hard-negatives-100k
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  - name: negs
    list: string
  splits:
  - name: train
    num_examples: 79684
- config_name: javascript-v1-pair-2M
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  splits:
  - name: train
    num_examples: 1310965
- config_name: php-v1-hard-negatives-100k
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  - name: negs
    list: string
  splits:
  - name: train
    num_examples: 75632
- config_name: php-v1-pair-2M
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  splits:
  - name: train
    num_examples: 1343442
- config_name: python-v1-hard-negatives-100k
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  - name: negs
    list: string
  splits:
  - name: train
    num_examples: 97147
- config_name: python-v1-pair-2M
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  splits:
  - name: train
    num_examples: 1807480
- config_name: ruby-v1-hard-negatives-100k
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  - name: negs
    list: string
  splits:
  - name: train
    num_examples: 68382
- config_name: ruby-v1-pair-2M
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  splits:
  - name: train
    num_examples: 1175219
configs:
- config_name: go-v1-hard-negatives-100k
  data_files:
  - split: train
    path: go-v1-hard-negatives-100k/train-*
- config_name: go-v1-pair-2M
  data_files:
  - split: train
    path: go-v1-pair-2M/train-*
- config_name: java-v1-hard-negatives-100k
  data_files:
  - split: train
    path: java-v1-hard-negatives-100k/train-*
- config_name: java-v1-pair-2M
  data_files:
  - split: train
    path: java-v1-pair-2M/train-*
- config_name: javascript-v1-hard-negatives-100k
  data_files:
  - split: train
    path: javascript-v1-hard-negatives-100k/train-*
- config_name: javascript-v1-pair-2M
  data_files:
  - split: train
    path: javascript-v1-pair-2M/train-*
- config_name: php-v1-hard-negatives-100k
  data_files:
  - split: train
    path: php-v1-hard-negatives-100k/train-*
- config_name: php-v1-pair-2M
  data_files:
  - split: train
    path: php-v1-pair-2M/train-*
- config_name: python-v1-hard-negatives-100k
  data_files:
  - split: train
    path: python-v1-hard-negatives-100k/train-*
- config_name: python-v1-pair-2M
  data_files:
  - split: train
    path: python-v1-pair-2M/train-*
- config_name: ruby-v1-hard-negatives-100k
  data_files:
  - split: train
    path: ruby-v1-hard-negatives-100k/train-*
- config_name: ruby-v1-pair-2M
  data_files:
  - split: train
    path: ruby-v1-pair-2M/train-*
license: apache-2.0
---
# cornstack-samples

Filtered CoRNStack sample subsets for code retrieval training.

Source dataset and paper:
- CoRNStack collection: https://huggingface.co/collections/nomic-ai/cornstack
- CoRNStack paper: https://huggingface.co/papers/2412.01007

## What This Release Contains

This release keeps the original subset layout (six languages, each with a pair subset and a hard-negatives subset) and applies deterministic rule-based filtering.

In this revision, query-level deduplication is also applied per subset: when a `query` value appears more than once, only the first row is kept.

## Config Layout And Schema

Each language is published as two configs with split `train`:
- `{lang}-v1-pair-2M`
- `{lang}-v1-hard-negatives-100k`

Schema:
- Pair configs: `query`, `pos`
- Hard-negative configs: `query`, `pos`, `negs` (list[string])
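As a minimal illustration of the schema above (the validation helpers and sample rows are our own, not part of the dataset), a row from each config type can be checked like this:

```python
# Sketch: validate that rows match the schema above.
# The records here are illustrative dummies, not real dataset rows.

def check_pair_row(row: dict) -> bool:
    """Pair configs carry exactly `query` and `pos`, both strings."""
    return (
        set(row) == {"query", "pos"}
        and isinstance(row["query"], str)
        and isinstance(row["pos"], str)
    )

def check_hard_negative_row(row: dict) -> bool:
    """Hard-negative configs add `negs`, a list of strings."""
    return (
        set(row) == {"query", "pos", "negs"}
        and isinstance(row["query"], str)
        and isinstance(row["pos"], str)
        and isinstance(row["negs"], list)
        and all(isinstance(n, str) for n in row["negs"])
    )

pair_row = {"query": "parse a URL", "pos": "def parse_url(s): ..."}
hard_row = {"query": "parse a URL", "pos": "def parse_url(s): ...",
            "negs": ["def render(): ...", "def close(): ..."]}

print(check_pair_row(pair_row), check_hard_negative_row(hard_row))
```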

## Subsets And Row Counts (After Filter + Query Dedup)

| Subset (config name) | split | num_examples |
| --- | --- | ---: |
| `go-v1-pair-2M` | `train` | 1,541,111 |
| `go-v1-hard-negatives-100k` | `train` | 87,647 |
| `java-v1-pair-2M` | `train` | 1,491,655 |
| `java-v1-hard-negatives-100k` | `train` | 81,657 |
| `javascript-v1-pair-2M` | `train` | 1,310,965 |
| `javascript-v1-hard-negatives-100k` | `train` | 79,684 |
| `php-v1-pair-2M` | `train` | 1,343,442 |
| `php-v1-hard-negatives-100k` | `train` | 75,632 |
| `python-v1-pair-2M` | `train` | 1,807,480 |
| `python-v1-hard-negatives-100k` | `train` | 97,147 |
| `ruby-v1-pair-2M` | `train` | 1,175,219 |
| `ruby-v1-hard-negatives-100k` | `train` | 68,382 |

Total rows:
- Pair: 8,669,872
- Hard-negatives: 490,149
- Overall: 9,160,021

## Filter Impact (Query Dedup Stage)

The table below shows only the impact of the query-dedup stage, applied on top of the earlier rule-based filter.

| Subset | before | after | removed | removed_ratio |
| --- | ---: | ---: | ---: | ---: |
| `go-v1-pair-2M` | 1,992,985 | 1,541,111 | 451,874 | 22.67% |
| `go-v1-hard-negatives-100k` | 99,663 | 87,647 | 12,016 | 12.06% |
| `java-v1-pair-2M` | 1,752,593 | 1,491,655 | 260,938 | 14.89% |
| `java-v1-hard-negatives-100k` | 87,504 | 81,657 | 5,847 | 6.68% |
| `javascript-v1-pair-2M` | 1,960,276 | 1,310,965 | 649,311 | 33.12% |
| `javascript-v1-hard-negatives-100k` | 98,025 | 79,684 | 18,341 | 18.71% |
| `php-v1-pair-2M` | 1,710,537 | 1,343,442 | 367,095 | 21.46% |
| `php-v1-hard-negatives-100k` | 85,460 | 75,632 | 9,828 | 11.50% |
| `python-v1-pair-2M` | 1,990,051 | 1,807,480 | 182,571 | 9.17% |
| `python-v1-hard-negatives-100k` | 99,535 | 97,147 | 2,388 | 2.40% |
| `ruby-v1-pair-2M` | 1,583,047 | 1,175,219 | 407,828 | 25.76% |
| `ruby-v1-hard-negatives-100k` | 79,040 | 68,382 | 10,658 | 13.48% |

## Quick Usage

```python
from datasets import load_dataset

pair_ds = load_dataset("hotchpotch/cornstack-samples", "python-v1-pair-2M", split="train")
hard_ds = load_dataset("hotchpotch/cornstack-samples", "python-v1-hard-negatives-100k", split="train")

# Pair rows have `query` and `pos`; hard-negative rows add `negs`.
print(pair_ds.column_names, len(pair_ds))
print(hard_ds.column_names, len(hard_ds))
```
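For contrastive training, hard-negative rows are typically flattened into (query, pos, neg) triples, one per negative. A minimal sketch on dummy rows (the `to_triples` helper is our own; real rows carry the same fields):

```python
# Sketch: flatten hard-negative rows into (query, pos, neg) training triples.
# The dummy rows stand in for rows from a *-hard-negatives-100k config.

def to_triples(rows):
    """Yield one (query, pos, neg) tuple per negative in each row."""
    for row in rows:
        for neg in row["negs"]:
            yield (row["query"], row["pos"], neg)

rows = [
    {"query": "open a file", "pos": "def open_file(p): ...",
     "negs": ["def close(): ...", "def render(): ..."]},
]

triples = list(to_triples(rows))
print(len(triples))  # one triple per negative
```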

## License

This dataset follows the license of the upstream CoRNStack release and is distributed under **Apache-2.0**.

## Citation And Attribution

If you use this dataset, please cite and attribute CoRNStack:
- Paper: https://huggingface.co/papers/2412.01007
- Collection: https://huggingface.co/collections/nomic-ai/cornstack

## Noise Filtering Algorithm (Rule-based)

The following deterministic rules are applied before publishing this release.

1. Prefix-based noisy query removal
A row is dropped if `query` starts with any of the following prefixes:
- `TODO`
- `GET /`
- `POST /`
- `PUT /`
- `DELETE /`
- `Display a listing of the resource.`
- `Store a newly created resource in storage.`
- `Show the form for editing the specified resource.`
- `Update the specified resource in storage.`
- `Show the form for creating a new resource.`
- `Remove the specified resource from storage.`
- `Display the specified resource.`
- `Transform the resource into an array.`
- `Autogenerated method stub`
- `Auto generated`
- `this down() migration is autogenerated`
- `this up() migration is autogenerated`
- `"/ renamed from:"`
- `"/ access modifiers changed from:"`

2. Minimum positive-document length
A row is dropped if the positive side text is shorter than 30 characters.
- Both pair and hard-negative configs: `pos` length >= 30 required

3. Hard-negative validity constraint
For hard-negative configs, at least one valid negative must remain after normalization (`min_negs = 1`).

4. Query-level deduplication
Within each subset split, rows are grouped by exact `query` string.
- Keep the first occurrence
- Drop all later duplicates
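This first-occurrence dedup can be sketched with an order-preserving seen-set (`dedup_by_query` is our own helper name):

```python
# Sketch of rule 4: keep only the first row per exact query string.
def dedup_by_query(rows):
    seen = set()
    out = []
    for row in rows:
        if row["query"] in seen:
            continue  # later duplicate: drop
        seen.add(row["query"])
        out.append(row)
    return out

rows = [
    {"query": "a", "pos": "1"},
    {"query": "b", "pos": "2"},
    {"query": "a", "pos": "3"},  # duplicate query, dropped
]
print([r["pos"] for r in dedup_by_query(rows)])  # ['1', '2']
```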

This filtering is purely rule-based (no model scoring), targeting high-noise templates and low-information positives while preserving broad retrieval coverage.