---
language:
- en
license: apache-2.0
task_categories:
- text-generation
- fill-mask
tags:
- code
- python
- verl
- repo-specific-finetuning
pretty_name: Verl Code Corpus (File Holdout Split)
size_categories:
- n<1K
---

# archit11/verl-code-corpus-track-a-file-split

Repository-specific code corpus extracted from the `verl` project and split by file for training/evaluation.

## What is in this dataset

- Source corpus: `data/code_corpus_verl`
- Total files: 214
- Train files: 172
- Validation files: 21
- Test files: 21
- File type filter: .py
- Split mode: `file` (file-level holdout)
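
The counts above are consistent with a 10%/10% file-level holdout over 214 files. A minimal sketch of how such a split could be produced (the seed and exact procedure here are assumptions, not the dataset's actual script):

```python
import random

def file_level_split(files, val_frac=0.1, test_frac=0.1, seed=42):
    """Hold out whole files so no file's contents leak across splits."""
    files = sorted(files)              # deterministic order before shuffling
    rng = random.Random(seed)
    rng.shuffle(files)
    n_val = int(len(files) * val_frac)
    n_test = int(len(files) * test_frac)
    return (
        files[n_val + n_test:],        # train
        files[:n_val],                 # validation
        files[n_val:n_val + n_test],   # test
    )

train, val, test = file_level_split([f"file_{i}.py" for i in range(214)])
print(len(train), len(val), len(test))  # 172 21 21
```

Because entire files are held out, validation/test perplexity measures generalization to unseen files rather than memorization of seen ones.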

Each row has:

- `file_name`: flattened source file name
- `text`: full file contents
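
"Flattened" likely means the directory path is collapsed into a single token. A hypothetical sketch of such a flattening scheme (the actual separator used by the dataset is an assumption):

```python
def flatten_name(path: str) -> str:
    # Hypothetical: collapse directory separators so nested paths
    # become a single flat file name.
    return path.replace("/", "_")

print(flatten_name("verl/trainer/ppo/core_algos.py"))
# verl_trainer_ppo_core_algos.py
```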

## Training context

This dataset was used for extended pretraining of:

- Model repo: `https://huggingface.co/archit11/qwen2.5-coder-3b-verl-track-a-lora`
- Base model: `Qwen/Qwen2.5-Coder-3B` (loaded from a local Hugging Face cache snapshot)
- Sequence curriculum: [768, 1024]
- Learning rate: 0.0001
- Batch size: 8

Evaluation from this run:

- Baseline perplexity (val/test): 3.1820 / 2.7764
- Post-training perplexity (val/test): 2.7844 / 2.2379
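
Since perplexity is exp(mean negative log-likelihood), the numbers above translate into a relative reduction of roughly 12.5% on validation and 19.4% on test:

```python
import math

base_val, base_test = 3.1820, 2.7764
post_val, post_test = 2.7844, 2.2379

# Relative perplexity reduction on each split.
val_impr = (base_val - post_val) / base_val
test_impr = (base_test - post_test) / base_test
print(f"val: {val_impr:.1%}, test: {test_impr:.1%}")  # val: 12.5%, test: 19.4%

# Equivalent drop in mean cross-entropy loss (nats/token),
# since loss = log(perplexity).
val_loss_drop = math.log(base_val) - math.log(post_val)
```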

## Load with datasets

```python
from datasets import load_dataset

ds = load_dataset("archit11/verl-code-corpus-track-a-file-split")
print(ds)
print(ds["train"][0]["file_name"])
```