---
language:
- en
license: apache-2.0
task_categories:
- text-generation
- fill-mask
tags:
- code
- rust
- hyperswitch
- repo-specific-finetuning
pretty_name: hyperswitch Code Corpus (Track A Split)
size_categories:
- n<1K
---

# archit11/hyperswitch-code-corpus-track-a

Repository-specific code corpus extracted from `hyperswitch` and split by file for training/evaluation.

## What is in this dataset

- Source corpus: `data/code_corpus_hyperswitch`
- Total files: 300
- Train files: 270
- Validation files: 30
- Test files: 0
- File type filter: .rs
- Split mode: `file` (file-level holdout)

Each row has:

- `file_name`: flattened source file name
- `text`: full file contents

## Training context

This dataset was used for extended pretraining of:

- Model repo: `https://huggingface.co/archit11/qwen2.5-coder-3b-hyperswitch-track-a-lora`
- Base model: `/root/.cache/huggingface/hub/models--Qwen--Qwen2.5-Coder-3B/snapshots/09d9bc5d376b0cfa0100a0694ea7de7232525803`
- Sequence curriculum: [768, 1024, 1536]
- Learning rate: 0.001
- Batch size: 1
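
The sequence curriculum above means the run trained in stages of increasing block size. A minimal sketch of that loop, with a hypothetical `train_stage` callback standing in for the real training code in `track_a_pretraining.py`:

```python
# Hypothetical sketch of the sequence-length curriculum: one training pass
# per block size, re-chunking the same token stream at each stage.
CURRICULUM = [768, 1024, 1536]

def run_curriculum(token_ids, train_stage):
    for block_size in CURRICULUM:
        # Non-overlapping windows, so the chunk count shrinks as block size grows.
        n_chunks = len(token_ids) // block_size
        train_stage(block_size, n_chunks)

stages = []
run_curriculum(list(range(10_000)), lambda bs, n: stages.append((bs, n)))
print(stages)  # [(768, 13), (1024, 9), (1536, 6)]
```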

Evaluation from this run (on the held-out validation split):

- Baseline perplexity: 2.2832
- Post-training perplexity: 1.5429

## Filtering

- Source repo restricted to Rust files (`.rs`) under `crates/` only
  (data_preparation.py:48 and data_preparation.py:44).
- Hard path exclusions for noisy directories such as tests, docs, examples,
  migrations, and scripts (data_preparation.py:49).
- Dropped empty and generated files (markers like "generated by",
  "auto-generated", "do not edit") in data_preparation.py:97 and
  data_preparation.py:149.
- Kept files only if the line count was within [25, 4000]
  (data_preparation.py:45, data_preparation.py:46, data_preparation.py:195).
- Kept only structurally rich files (functions + types >= 2) in
  data_preparation.py:205.
- Ranked by a quality score and kept the top 300 files (data_preparation.py:47,
  data_preparation.py:209, data_preparation.py:229).
- Actual corpus stats: 300 files, 370,212 lines
  (data/corpus_metadata_hyperswitch.json).
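
The path, marker, and line-count filters above can be sketched as a single predicate. This is an illustrative reconstruction, not the actual `data_preparation.py` code; the directory set and marker strings are assumptions based on the description:

```python
from pathlib import Path

# Hypothetical sketch of the filtering pipeline described above; the real
# thresholds and exclusion lists live in data_preparation.py.
EXCLUDED_DIRS = {"tests", "docs", "examples", "migrations", "scripts"}
GENERATED_MARKERS = ("generated by", "auto-generated", "do not edit")
MIN_LINES, MAX_LINES = 25, 4000

def keep_file(path: Path, text: str) -> bool:
    # Only .rs files under crates/, skipping noisy directories.
    if path.suffix != ".rs" or "crates" not in path.parts:
        return False
    if any(part in EXCLUDED_DIRS for part in path.parts):
        return False
    # Drop empty files and files whose header looks auto-generated.
    head = text[:500].lower()
    if not text.strip() or any(m in head for m in GENERATED_MARKERS):
        return False
    # Line-count window [25, 4000].
    return MIN_LINES <= len(text.splitlines()) <= MAX_LINES

print(keep_file(Path("crates/router/src/core.rs"), "fn main() {}\n" * 100))
```

The structural-richness check and quality-score ranking would then run only over files that pass this predicate.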

## Split

- For this run (results/track_a_hyperswitch_metrics_lr1e3_curr.json): 270 train
  files, 30 validation files, and no test set recorded.
- The current script performs the file-level split after
  random.shuffle(all_files) (track_a_pretraining.py:361,
  track_a_pretraining.py:377).
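
A file-level holdout of this shape (shuffle, then slice) can be sketched as follows; the function name, seed, and ratio here are illustrative, not the actual `track_a_pretraining.py` code:

```python
import random

# Hypothetical sketch of the file-level holdout split: shuffle the file list,
# then slice off the validation fraction.
def split_files(all_files, val_fraction=0.1, seed=0):
    files = list(all_files)
    random.Random(seed).shuffle(files)  # shuffle before slicing
    n_val = int(len(files) * val_fraction)
    return files[n_val:], files[:n_val]  # (train, validation)

train, val = split_files([f"file_{i}.rs" for i in range(300)])
print(len(train), len(val))  # 270 train / 30 validation, as in this run
```

Splitting at the file level (rather than the chunk level) keeps every chunk of a held-out file out of training, so validation perplexity is not inflated by within-file memorization.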

## Chunking

- No AST-based chunking yet: compute is constrained, and the limited sequence
  length would make it hard to fit whole syntactic units into one window.
- Files are concatenated per split with a `// FILE: <name>` header
  (track_a_pretraining.py:157).
- Tokenization uses add_special_tokens=False; chunks are fixed-size,
  non-overlapping windows (stride = block size) in track_a_pretraining.py:176.
- Curriculum for this run: 768 -> 1024 -> 1536
  (results/track_a_hyperswitch_metrics_lr1e3_curr.json).
- Validation chunks were capped at 160 (seen in run metrics) via random subset
  trimming in track_a_pretraining.py:196.
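
A minimal sketch of the concatenate-then-window scheme, with a plain list of token ids standing in for the real tokenizer output (the helper names are illustrative):

```python
# Hypothetical sketch of the chunking scheme described above: files are joined
# with a "// FILE: <name>" header, then the token stream is cut into
# fixed-size, non-overlapping windows (stride == block size).
def concat_corpus(files: dict[str, str]) -> str:
    return "\n".join(f"// FILE: {name}\n{text}" for name, text in files.items())

def chunk_ids(token_ids: list[int], block_size: int) -> list[list[int]]:
    # Non-overlapping windows; a trailing remainder shorter than
    # block_size is dropped.
    return [token_ids[i:i + block_size]
            for i in range(0, len(token_ids) - block_size + 1, block_size)]

corpus = concat_corpus({"a.rs": "fn a() {}", "b.rs": "fn b() {}"})
# Stand-in for tokenizer(corpus, add_special_tokens=False)["input_ids"]:
ids = list(range(2000))
chunks = chunk_ids(ids, 768)
print(len(chunks), len(chunks[0]))  # 2 chunks of 768 tokens; remainder dropped
```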

## Perplexity eval

- PPL is computed as the exponential of the average token-level cross-entropy
  loss over eval chunks (track_a_pretraining.py:267).
- This run reported 2.2832 -> 1.5429 (baseline -> post-training).
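
For reference, the token-weighted PPL computation can be written in a few lines; the loss values below are illustrative, not from this run:

```python
import math

# Minimal sketch: perplexity is the exponential of the mean token-level
# cross-entropy loss (in nats) over all eval chunks.
def perplexity(chunk_losses: list[float], chunk_token_counts: list[int]) -> float:
    total_loss = sum(l * n for l, n in zip(chunk_losses, chunk_token_counts))
    total_tokens = sum(chunk_token_counts)
    return math.exp(total_loss / total_tokens)

# Two equal-sized chunks with illustrative per-token CE losses in nats.
print(round(perplexity([0.40, 0.45], [768, 768]), 4))
```

Note that a reported PPL of 1.5429 corresponds to a mean CE loss of ln(1.5429) ≈ 0.434 nats per token.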

## Load with datasets

```python
from datasets import load_dataset

ds = load_dataset("archit11/hyperswitch-code-corpus-track-a")
print(ds)
print(ds["train"][0]["file_name"])
```