---
language:
- en
license: apache-2.0
task_categories:
- text-generation
- fill-mask
tags:
- code
- rust
- hyperswitch
- repo-specific-finetuning
pretty_name: hyperswitch Code Corpus (Track A Split)
size_categories:
- n<1K
---

# archit11/hyperswitch-code-corpus-track-a

Repository-specific code corpus extracted from `hyperswitch` and split by file for training/evaluation.

## What is in this dataset

- Source corpus: `data/code_corpus_hyperswitch`
- Total files: 300
- Train files: 270
- Validation files: 30
- Test files: 0
- File type filter: `.rs`
- Split mode: `file` (file-level holdout)

Each row has:

- `file_name`: flattened source file name
- `text`: full file contents
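The two fields are plain strings, so rows can be inspected directly. A minimal sketch, where the in-memory `rows` list stands in for `ds["train"]` (loading via `datasets` is shown at the end of this card):

```python
# Each row carries two string fields: `file_name` and `text`.
# `rows` is an illustrative stand-in for ds["train"].
rows = [
    {"file_name": "crates_router_src_lib.rs", "text": "fn main() {}\n"},
]

for row in rows:
    # Line counts in the real corpus fall in the [25, 4000] filter range
    # described below (this toy file is obviously shorter).
    n_lines = row["text"].count("\n")
    print(row["file_name"], n_lines)
```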

## Training context

This dataset was used for extended pretraining of:

- Model repo: `https://huggingface.co/archit11/qwen2.5-coder-3b-hyperswitch-track-a-lora`
- Base model: `Qwen/Qwen2.5-Coder-3B` (local snapshot `09d9bc5d376b0cfa0100a0694ea7de7232525803`)
- Sequence curriculum: [768, 1024, 1536]
- Learning rate: 1e-3
- Batch size: 1
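The sequence curriculum amounts to running the same training loop at progressively longer block sizes. A minimal sketch of that schedule, where `train_stage` is a hypothetical placeholder, not the actual script's API:

```python
# Staged sequence-length curriculum, mirroring the [768, 1024, 1536]
# schedule above. `train_stage` is a hypothetical stand-in for one
# pretraining pass at a fixed block size.
def train_stage(block_size: int) -> str:
    return f"trained at block_size={block_size}"

curriculum = [768, 1024, 1536]
log = [train_stage(block_size) for block_size in curriculum]
print(log)
```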

Evaluation from this run (on the held-out validation split):

- Baseline perplexity: 2.2832
- Post-training perplexity: 1.5429

## Filtering

- Source repo restricted to Rust files (`.rs`) under `crates/` only (data_preparation.py:48, data_preparation.py:44).
- Hard path exclusions for noisy directories such as tests, docs, examples, migrations, and scripts (data_preparation.py:49).
- Dropped empty and generated files (markers like "generated by", "auto-generated", "do not edit") (data_preparation.py:97, data_preparation.py:149).
- Kept files only if their line count falls in [25, 4000] (data_preparation.py:45, data_preparation.py:46, data_preparation.py:195).
- Kept only structurally rich files (functions + types >= 2) (data_preparation.py:205).
- Ranked by a quality score and kept the top 300 files (data_preparation.py:47, data_preparation.py:209, data_preparation.py:229).
- Actual corpus stats: 300 files, 370,212 lines (data/corpus_metadata_hyperswitch.json).
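The filter chain can be approximated as below. Paths, markers, and thresholds come from the bullets above; the `keep_file` helper and its regexes are illustrative, not the actual `data_preparation.py` API:

```python
import re

EXCLUDED_DIRS = {"tests", "docs", "examples", "migrations", "scripts"}
GENERATED_MARKERS = ("generated by", "auto-generated", "do not edit")
MIN_LINES, MAX_LINES = 25, 4000

def keep_file(path: str, text: str) -> bool:
    """Illustrative re-creation of the filters described above."""
    # Rust sources under crates/ only.
    if not (path.startswith("crates/") and path.endswith(".rs")):
        return False
    # Hard path exclusions for noisy directories.
    if any(part in EXCLUDED_DIRS for part in path.split("/")):
        return False
    # Drop empty or generated files (markers checked near the top of the file).
    head = text[:500].lower()
    if not text.strip() or any(m in head for m in GENERATED_MARKERS):
        return False
    # Keep only files with a line count in [25, 4000].
    n_lines = text.count("\n") + 1
    if not (MIN_LINES <= n_lines <= MAX_LINES):
        return False
    # Keep only structurally rich files: functions + type definitions >= 2.
    structures = len(re.findall(r"\bfn\s+\w+", text)) + len(
        re.findall(r"\b(?:struct|enum|trait)\s+\w+", text)
    )
    return structures >= 2

sample = "struct A;\n" + "fn f() {}\n" * 30
print(keep_file("crates/router/src/lib.rs", sample))  # True
```

The quality-score ranking that trims the survivors to 300 files is not reproduced here.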

## Split

- For this run (results/track_a_hyperswitch_metrics_lr1e3_curr.json): 270 train files, 30 validation files, and effectively no test set recorded.
- The current script performs the file-level split after `random.shuffle(all_files)` (track_a_pretraining.py:361, track_a_pretraining.py:377).
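A minimal re-creation of that shuffle-then-slice file split; the seed is a placeholder, since the run's actual seed is not recorded here:

```python
import random

# File-level holdout: shuffle, then slice 90/10 into train/validation,
# mirroring the 270/30 split recorded for this run. No test slice is taken.
all_files = [f"file_{i}.rs" for i in range(300)]

random.seed(0)  # placeholder; the actual run's seed is not recorded here
random.shuffle(all_files)

n_val = len(all_files) // 10
val_files = all_files[:n_val]
train_files = all_files[n_val:]
print(len(train_files), len(val_files))  # 270 30
```

Because the holdout is per file, validation measures generalization to unseen files from the same repository, not unseen chunks of seen files.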

## Chunking

- No AST-based chunking yet: compute constraints and the limited sequence length make it hard to do well.
- Files are concatenated per split with a `// FILE: <name>` header (track_a_pretraining.py:157).
- Tokenization uses `add_special_tokens=False`; chunks are fixed-size, non-overlapping windows (stride = block size) (track_a_pretraining.py:176).
- Curriculum for this run: 768 -> 1024 -> 1536 (results/track_a_hyperswitch_metrics_lr1e3_curr.json).
- Validation chunks were capped at 160 (seen in run metrics) via the random-subset trimming logic in track_a_pretraining.py:196.
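The concatenate-then-window step can be sketched as follows; a whitespace `tokenize` stands in for the real tokenizer (which runs with `add_special_tokens=False`), and `build_chunks` is illustrative, not the script's API:

```python
def tokenize(text: str) -> list[str]:
    # Stand-in for the real tokenizer (called with add_special_tokens=False).
    return text.split()

def build_chunks(files: dict[str, str], block_size: int) -> list[list[str]]:
    # Concatenate files with a `// FILE: <name>` header, then cut the token
    # stream into fixed-size, non-overlapping windows (stride = block size).
    corpus = "".join(f"// FILE: {name}\n{text}\n" for name, text in files.items())
    tokens = tokenize(corpus)
    return [
        tokens[i : i + block_size]
        for i in range(0, len(tokens) - block_size + 1, block_size)
    ]

files = {"lib.rs": "fn main ( ) { }", "util.rs": "struct A ;"}
chunks = build_chunks(files, block_size=4)
print(len(chunks), chunks[0])
```

Note the trailing partial window is dropped, so a small tail of tokens per split never reaches training.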

## Perplexity eval

- PPL is computed from the average token-level cross-entropy loss over eval chunks (track_a_pretraining.py:267).
- This run reported 2.2832 -> 1.5429 (baseline -> post-training).
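Perplexity here is just `exp` of the mean token-level cross-entropy. A minimal sketch with made-up per-chunk losses (with equal-sized chunks, the simple mean over chunks equals the token-level average):

```python
import math

# PPL = exp(mean token-level cross-entropy loss over eval chunks).
# The per-chunk losses below are illustrative, not the run's real values.
chunk_losses = [0.40, 0.45, 0.45, 0.44]

mean_ce = sum(chunk_losses) / len(chunk_losses)
perplexity = math.exp(mean_ce)
print(round(perplexity, 4))
```

For reference, the run's post-training PPL of 1.5429 corresponds to a mean cross-entropy of about 0.434 nats per token.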

## Load with datasets

```python
from datasets import load_dataset

ds = load_dataset("archit11/hyperswitch-code-corpus-track-a")
print(ds)
print(ds["train"][0]["file_name"])
```