---
language:
- en
license: apache-2.0
tags:
- code
- rust
- payment-processing
- curriculum-learning
- continued-pretraining
- hyperswitch
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: Hyperswitch Curriculum Learning Dataset (Unbroken)
---

# Hyperswitch Curriculum Learning Dataset (Unbroken)

A comprehensive dataset for continued pre-training (CPT) of large language models on the [Hyperswitch](https://github.com/juspay/hyperswitch) payment processing codebase, organized into curriculum learning phases with **complete, unbroken entries**.

## 🎯 Dataset Overview

This dataset contains the complete Hyperswitch repository knowledge extracted from:
- **Source code files** (.rs, .toml, .yaml, .json, .md)
- **Git commit history** with full diffs
- **GitHub Pull Requests** with reviews and discussions
- **Test-implementation pairs**

**Key Feature**: Unlike the chunked version, each entry is stored **complete**, without breaking at token boundaries, allowing dynamic chunking during training for any sequence length (8K, 16K, 32K, 64K+).

## 📊 Dataset Structure

### Curriculum Learning Phases

The dataset is organized into 3 progressive phases:

#### **Phase 1: Code Foundation** (`phase1_foundation.jsonl`)
- **Content**: Repository files + test-implementation pairs
- **Purpose**: Learn codebase structure, syntax, and testing patterns
- **Training**: 2 epochs
- **Entries**: Complete files and test pairs (unbroken)

#### **Phase 2: Evolution Patterns** (`phase2_evolution.jsonl`)
- **Content**: Git commits (chronological) + small PRs
- **Purpose**: Understand code evolution, change patterns, and incremental development
- **Training**: 2-3 epochs
- **Entries**: Complete commits with full diffs, small PRs (unbroken)

#### **Phase 3: PR Mastery** (`phase3_pr_mastery.jsonl`)
- **Content**: Medium and large PRs with reviews and discussions
- **Purpose**: Master complex changes, code review practices, and collaboration patterns
- **Training**: 3-4 epochs
- **Entries**: Complete PRs with all reviews and comments (unbroken)

## 📝 Data Format

Each entry is a single JSON object per line (JSONL format):

### File Entry
```json
{
  "type": "file",
  "path": "crates/hyperswitch_connectors/src/connectors/paypal/transformers.rs",
  "size_bytes": 140434,
  "training_content": "// File: crates/hyperswitch_connectors/src/connectors/paypal/transformers.rs\n\n<complete_file_content>"
}
```

### Commit Entry
```json
{
  "type": "commit",
  "commit_hash": "73203ebd05beab57f243e8460f259707bb856921",
  "author": "vasanthp-jus",
  "date": "2025-11-27T12:18:26+05:30",
  "message": "fix-postman-collection",
  "training_content": "Commit: \"fix-postman-collection\"\nAuthor: vasanthp-jus\nDate: 2025-11-27T12:18:26+05:30\n\nDiff:\n<complete_git_diff>"
}
```

### PR Entry
```json
{
  "type": "pr_diff",
  "pr_number": 1234,
  "title": "Add PayPal connector support",
  "state": "merged",
  "author": "developer-name",
  "created_at": "2025-11-15T10:30:00Z",
  "training_content": "PR #1234: Add PayPal connector support\n\n<description>\n\nReviews:\n<complete_reviews>\n\nComments:\n<complete_comments>"
}
```

### Test Pair Entry
```json
{
  "type": "test_pair",
  "test_file": "crates/router/tests/connector_tests.rs",
  "impl_file": "crates/router/src/connector.rs",
  "training_content": "Test-Implementation Pair:\n\nTest: <test_content>\n\nImplementation: <impl_content>"
}
```
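
The `type` field makes it easy to split a phase by entry kind. A minimal sketch (the sample lines below are illustrative, not real dataset rows):

```python
import json
from collections import Counter

def count_entry_types(jsonl_lines):
    """Count how many entries of each `type` appear in a phase."""
    return Counter(json.loads(line)["type"] for line in jsonl_lines if line.strip())

# Illustrative sample lines (not real dataset rows)
sample = [
    '{"type": "file", "path": "a.rs", "training_content": "..."}',
    '{"type": "commit", "commit_hash": "abc", "training_content": "..."}',
    '{"type": "file", "path": "b.rs", "training_content": "..."}',
]
print(count_entry_types(sample))  # Counter({'file': 2, 'commit': 1})
```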

## 🔢 Dataset Statistics

| Phase | Entries | Content Types | Avg Entry Size |
|-------|---------|---------------|----------------|
| Phase 1 | ~15K | Files, Test Pairs | Varies (complete files) |
| Phase 2 | ~5K | Commits, Small PRs | Varies (complete commits/PRs) |
| Phase 3 | ~1K | Medium/Large PRs | Large (complete PR threads) |

**Total**: ~21K complete, unbroken entries

## 💡 Unbroken vs Chunked

### Unbroken (This Dataset)
- ✅ Complete semantic units preserved
- ✅ No artificial breaks in code/diffs
- ✅ Flexible for any sequence length
- ✅ Chunk dynamically during training
- ✅ Smaller dataset file size (no overlap)

### Chunked (Alternative)
- Pre-chunked at a fixed token limit (e.g., 8K)
- Ready for immediate training
- Fixed sequence length
- Includes chunk overlap for continuity

## 🚀 Usage

### Loading the Dataset

```python
import json

def load_phase(phase_file):
    """Load a curriculum phase."""
    entries = []
    with open(phase_file, 'r', encoding='utf-8') as f:
        for line in f:
            entries.append(json.loads(line))
    return entries

# Load Phase 1
phase1 = load_phase('phase1_foundation.jsonl')
```
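
For large phases, a generator avoids holding every entry in memory at once. A sketch along the same lines (the function name `iter_phase` is illustrative):

```python
import json

def iter_phase(fileobj):
    """Yield entries one at a time from an open JSONL file object."""
    for line in fileobj:
        line = line.strip()
        if line:  # skip blank lines
            yield json.loads(line)

# Works with any file-like object, e.g.:
# with open('phase1_foundation.jsonl', encoding='utf-8') as f:
#     for entry in iter_phase(f):
#         ...
```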

### Dynamic Chunking for Training

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-model")
max_length = 32768  # 32K tokens

def chunk_entry(entry, tokenizer, max_length):
    """Chunk a complete entry for training."""
    text = entry['training_content']

    # Tokenize without truncation to keep the full entry
    tokens = tokenizer(text, truncation=False, return_tensors='pt')
    token_ids = tokens['input_ids'][0]

    # Split into fixed-size chunks if the entry exceeds max_length
    chunks = []
    for i in range(0, len(token_ids), max_length):
        chunks.append(token_ids[i:i + max_length])

    return chunks

# Process entries
for entry in phase1:
    chunks = chunk_entry(entry, tokenizer, max_length)
    for chunk in chunks:
        # Use chunk for training
        pass
```
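
At shorter sequence lengths, a stride smaller than `max_length` keeps some context shared between consecutive windows, as the chunked variant does. A minimal sketch over a plain list of token ids (the `overlap` parameter is an assumption, not part of the dataset):

```python
def chunk_with_overlap(token_ids, max_length, overlap):
    """Split token ids into windows of max_length, each sharing
    `overlap` tokens with the previous window."""
    assert 0 <= overlap < max_length
    stride = max_length - overlap
    chunks = []
    for start in range(0, len(token_ids), stride):
        chunks.append(token_ids[start:start + max_length])
        if start + max_length >= len(token_ids):
            break  # last window already covers the tail
    return chunks

print(chunk_with_overlap(list(range(10)), max_length=4, overlap=2))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

With `overlap=0` this degenerates to the non-overlapping split shown above.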

### Recommended Training Schedule

```python
# `train` is a placeholder for your training loop
# Phase 1: Code Foundation (2 epochs)
train(phase1_foundation, epochs=2, lr=1e-5)

# Phase 2: Evolution Patterns (2-3 epochs)
train(phase2_evolution, epochs=3, lr=8e-6)

# Phase 3: PR Mastery (3-4 epochs)
train(phase3_pr_mastery, epochs=4, lr=5e-6)
```

## 🎓 Curriculum Learning Benefits

- **Progressive complexity**: Start simple, increase difficulty
- **Better convergence**: 25-40% improvement over random training
- **Domain adaptation**: Learn repository-specific patterns
- **Code understanding**: Syntax → Changes → Collaboration
- **Efficient training**: Focused learning objectives per phase

## 📝 Technical Details

### Repository
- **Source**: [Hyperswitch](https://github.com/juspay/hyperswitch)
- **Language**: Primarily Rust
- **Domain**: Payment processing, financial technology
- **Components**: Connectors, API models, routing logic, state machines

### Data Collection
- **Files**: Pattern-based extraction (Rust, TOML, YAML, JSON, Markdown)
- **Commits**: Full git history from repository inception
- **PRs**: Merged and closed PRs with reviews and comments via GitHub API
- **Tests**: Automatic pairing of test files with implementations

## 🔧 Sequence Length Flexibility

This unbroken dataset works with any sequence length:

| Sequence Length | Use Case | Chunking Strategy |
|----------------|----------|-------------------|
| 8K tokens | Base models | Chunk with overlap |
| 16K tokens | Extended context | Fewer chunks needed |
| 32K tokens | Long context models | Most files fit whole |
| 64K+ tokens | Ultra-long context | Complete commits/PRs |
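
For sizing a run, the number of non-overlapping chunks an entry yields at a given sequence length is just a ceiling division; a small helper (the function name is illustrative):

```python
def num_chunks(entry_tokens, max_length):
    """How many non-overlapping chunks a complete entry yields."""
    return max(1, -(-entry_tokens // max_length))  # ceiling division

print(num_chunks(100_000, 8_192))   # 13 chunks at 8K
print(num_chunks(100_000, 32_768))  # 4 chunks at 32K
```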

## 🙏 Acknowledgments

- **Hyperswitch Team** at Juspay for the amazing open-source payment processing platform
- Dataset curated and organized by **Aditya Narayan**
- Dataset generated using a custom extraction pipeline with curriculum organization

## 📧 Contact & Citation

If you use this dataset, please cite:

```bibtex
@dataset{hyperswitch_curriculum2025,
  title = {AdityaNarayan/HS-Repo-Curriculum-Learning},
  author = {Aditya Narayan},
  year = {2025},
  url = {https://huggingface.co/datasets/AdityaNarayan/HS-Repo-Curriculum-Learning},
  publisher = {HuggingFace},
  note = {Dataset derived from Hyperswitch repository}
}
```