# Hyperswitch Token-Aware CPT Dataset

This dataset contains **1,076 samples** of Rust code from the [Hyperswitch](https://github.com/juspay/hyperswitch) payment router project, optimized for Continued Pre-Training (CPT) with the **Kwaipilot/KAT-Dev** tokenizer.

## Dataset Statistics

- **Total Samples**: 1,076
- **Total Tokens**: 5,687,255
- **Mean Tokens per Sample**: 5,285
- **Token Range**: 2,001 - 15,609

### Token Distribution

- **< 4k tokens**: 38.1% of samples
- **4k-10k tokens**: 52.0% of samples
- **10k+ tokens**: 9.9% of samples
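These bands can be recomputed from the per-sample token counts. A minimal sketch (the helper name is hypothetical; the band boundaries match the list above):

```python
from collections import Counter

def token_buckets(token_counts):
    """Count samples per band: <4k, 4k-10k, 10k+ tokens."""
    def band(n):
        if n < 4_000:
            return "<4k"
        if n < 10_000:
            return "4k-10k"
        return "10k+"
    return Counter(band(n) for n in token_counts)

# e.g. over the dataset: token_buckets(s['metadata']['token_count'] for s in dataset['train'])
```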

### Granularity Types

- **file**: 721 samples (single large files)
- **module**: 180 samples (multiple files from same module)
- **combined_files**: 160 samples (small files combined by crate)
- **crate**: 15 samples (entire small crates)

## Top Crates

1. **router** - 371 samples
2. **hyperswitch_connectors** - 336 samples
3. **analytics** - 54 samples
4. **diesel_models** - 39 samples
5. **api_models** - 28 samples

## Sample Structure

Each sample contains:
- `id`: Unique identifier
- `type`: Sample type (always "clm" for causal language modeling)
- `granularity`: Level of code organization (file/module/combined_files/crate)
- `content`: Full code with path metadata in format:
  ```
  <path>
  Repository: hyperswitch
  Crate: [crate_name]
  File: [file_path]
  Tokens: [token_count]
  </path>

  <file>
  [actual code content]
  </file>
  ```
- `metadata`: Contains crate, file info, and token count
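If you need the code body without the header, the `content` field can be split back apart. A minimal parsing sketch, assuming the exact tag layout shown above (the function name and regex details are illustrative, not part of the dataset):

```python
import re

def parse_sample_content(content: str):
    """Split a sample's `content` into (metadata dict, code string),
    assuming the <path>...</path> / <file>...</file> layout above."""
    path_match = re.search(r"<path>\n(.*?)\n</path>", content, re.DOTALL)
    file_match = re.search(r"<file>\n(.*?)\n</file>", content, re.DOTALL)
    metadata = {}
    if path_match:
        for line in path_match.group(1).splitlines():
            key, _, value = line.partition(":")
            metadata[key.strip()] = value.strip()
    code = file_match.group(1) if file_match else ""
    return metadata, code

# Tiny synthetic sample for illustration
example = (
    "<path>\nRepository: hyperswitch\nCrate: router\n"
    "File: src/lib.rs\nTokens: 2500\n</path>\n\n"
    "<file>\npub fn main() {}\n</file>"
)
meta, code = parse_sample_content(example)
```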

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("archit11/hyperswitch-token-aware-cpt")

# Access samples
sample = dataset['train'][0]
print(f"Tokens: {sample['metadata']['token_count']:,}")
print(f"Crate: {sample['metadata']['crate']}")
print(f"Granularity: {sample['granularity']}")

# Filter by token count
medium_samples = dataset['train'].filter(
    lambda x: 4000 <= x['metadata']['token_count'] < 10000
)

# Filter by crate
router_samples = dataset['train'].filter(
    lambda x: x['metadata']['crate'] == 'router'
)
```

## Training Recommendations

- **Context Length**: 16k tokens (the longest sample is 15,609 tokens)
- **Tokenizer**: Kwaipilot/KAT-Dev
- **Suggested Batch Size**: 1-2 samples per batch (due to large context)
- **Format**: Samples are pre-formatted with `<path>` and `<file>` tags
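Because sample lengths span 2k–15.6k tokens, fixed batch sizes give very uneven per-step token counts. One option is to group samples by their stored `token_count` under a token budget. A greedy sketch (the function name is hypothetical; the 16k budget matches the context length above):

```python
def greedy_token_batches(token_counts, budget=16_000):
    """Pack sample indices into batches whose summed token counts
    stay within `budget`. Every sample fits alone, since the
    longest sample (15,609 tokens) is under the 16k budget."""
    batches, current, current_tokens = [], [], 0
    for idx, n in enumerate(token_counts):
        if current and current_tokens + n > budget:
            batches.append(current)
            current, current_tokens = [], 0
        current.append(idx)
        current_tokens += n
    if current:
        batches.append(current)
    return batches

# Three short samples share a batch; a near-max sample gets its own
batches = greedy_token_batches([3_000, 4_000, 5_000, 15_000])
```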

## Source

- **Repository**: https://github.com/juspay/hyperswitch
- **Language**: Rust
- **License**: Apache 2.0

## Generation Method

Samples were generated using token-aware strategies:
1. **Large files** (2k-16k tokens) included as-is
2. **Small files** combined within the same crate until reaching 2k+ tokens
3. **Module clusters** grouped by directory structure
4. **Complete crates** for small crates that fit within context

All token counts measured using the Kwaipilot/KAT-Dev tokenizer.
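The first two strategies can be sketched as a single routing pass. This is an illustrative reconstruction, not the actual generation script: files already in the 2k–16k window become standalone `file` samples, while smaller files are greedily combined per crate (handling of files above 16k is omitted here):

```python
def pack_small_files(files, min_tokens=2_000, max_tokens=16_000):
    """Route (path, token_count) pairs from one crate:
    mid-sized files become standalone samples; small files are
    combined until the group reaches the 2k-token floor."""
    standalone, combined = [], []
    group, group_tokens = [], 0
    for path, tokens in files:
        if min_tokens <= tokens <= max_tokens:
            standalone.append([path])
        elif tokens < min_tokens:
            group.append(path)
            group_tokens += tokens
            if group_tokens >= min_tokens:
                combined.append(group)
                group, group_tokens = [], 0
    if group:  # leftover undersized group still emitted
        combined.append(group)
    return standalone, combined

standalone, combined = pack_small_files(
    [("a.rs", 3_000), ("b.rs", 500), ("c.rs", 800), ("d.rs", 900)]
)
```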