# Hyperswitch Token-Aware CPT Dataset

This dataset contains **1,076 samples** of Rust code from the [Hyperswitch](https://github.com/juspay/hyperswitch) payment router project, optimized for Continued Pre-Training (CPT) with the **Kwaipilot/KAT-Dev** tokenizer.

## Dataset Statistics

- **Total Samples**: 1,076
- **Total Tokens**: 5,687,255
- **Mean Tokens per Sample**: 5,285
- **Token Range**: 2,001 - 15,609

### Token Distribution

- **< 4k tokens**: 38.1% of samples
- **4k-10k tokens**: 52.0% of samples
- **10k+ tokens**: 9.9% of samples
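The bucket percentages above can be recomputed from the per-sample metadata; a minimal sketch, assuming a plain list of token counts (e.g. collected from `sample['metadata']['token_count']`):

```python
def bucket_token_counts(token_counts):
    """Group token counts into the three buckets reported above."""
    buckets = {"< 4k": 0, "4k-10k": 0, "10k+": 0}
    for n in token_counts:
        if n < 4000:
            buckets["< 4k"] += 1
        elif n < 10000:
            buckets["4k-10k"] += 1
        else:
            buckets["10k+"] += 1
    total = len(token_counts)
    # Percentages rounded to one decimal, as in the table above
    return {k: round(100 * v / total, 1) for k, v in buckets.items()}

# Illustrative counts only, not drawn from the dataset:
shares = bucket_token_counts([2500, 3800, 5200, 9000, 12000])
```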
### Granularity Types

- **file**: 721 samples (single large files)
- **module**: 180 samples (multiple files from same module)
- **combined_files**: 160 samples (small files combined by crate)
- **crate**: 15 samples (entire small crates)

## Top Crates

1. **router** - 371 samples
2. **hyperswitch_connectors** - 336 samples
3. **analytics** - 54 samples
4. **diesel_models** - 39 samples
5. **api_models** - 28 samples

## Sample Structure

Each sample contains:

- `id`: Unique identifier
- `type`: Sample type (always "clm" for causal language modeling)
- `granularity`: Level of code organization (file/module/combined_files/crate)
- `content`: Full code with path metadata in format:

  ```
  <path>
  Repository: hyperswitch
  Crate: [crate_name]
  File: [file_path]
  Tokens: [token_count]
  </path>
  <file>
  [actual code content]
  </file>
  ```

- `metadata`: Contains crate, file info, and token count
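A sample's `content` can be split back into its metadata header and raw code. The sketch below assumes the `<path>`/`<file>` layout shown above holds exactly; the helper name is illustrative:

```python
import re

def parse_sample_content(content: str):
    """Split a sample's `content` into its <path> metadata and <file> code."""
    path_match = re.search(r"<path>\n(.*?)\n</path>", content, re.DOTALL)
    file_match = re.search(r"<file>\n(.*?)\n</file>", content, re.DOTALL)
    if not path_match or not file_match:
        raise ValueError("content does not match the <path>/<file> layout")
    # Header lines are "Key: value" pairs
    meta = {}
    for line in path_match.group(1).splitlines():
        key, _, value = line.partition(": ")
        meta[key] = value
    return meta, file_match.group(1)

# Hypothetical sample, for illustration only:
example = (
    "<path>\nRepository: hyperswitch\nCrate: router\n"
    "File: src/lib.rs\nTokens: 2345\n</path>\n"
    "<file>\nfn main() {}\n</file>"
)
meta, code = parse_sample_content(example)
```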
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("archit11/hyperswitch-token-aware-cpt")

# Access samples
sample = dataset['train'][0]
print(f"Tokens: {sample['metadata']['token_count']:,}")
print(f"Crate: {sample['metadata']['crate']}")
print(f"Granularity: {sample['granularity']}")

# Filter by token count
medium_samples = dataset['train'].filter(
    lambda x: 4000 <= x['metadata']['token_count'] < 10000
)

# Filter by crate
router_samples = dataset['train'].filter(
    lambda x: x['metadata']['crate'] == 'router'
)
```
## Training Recommendations

- **Context Length**: 16k tokens (max sample is 15,609 tokens)
- **Tokenizer**: Kwaipilot/KAT-Dev
- **Suggested Batch Size**: 1-2 samples per batch (due to large context)
- **Format**: Samples are pre-formatted with `<path>` and `<file>` tags
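Because sample lengths span 2k to ~16k tokens, sorting by token count before batching keeps same-length samples together and reduces padding waste. A minimal sketch (the helper and batch size are illustrative, not part of the dataset):

```python
def length_sorted_batches(samples, batch_size=2):
    """Yield fixed-size batches of samples ordered by token count,
    so each batch pads to a similar length."""
    ordered = sorted(samples, key=lambda s: s["metadata"]["token_count"])
    for i in range(0, len(ordered), batch_size):
        yield ordered[i:i + batch_size]

# Hypothetical samples with only the metadata this sketch needs:
samples = [{"metadata": {"token_count": n}} for n in (15000, 2100, 8000, 4500)]
batches = list(length_sorted_batches(samples, batch_size=2))
```

Shuffling the resulting batches (rather than individual samples) preserves the padding benefit while still randomizing training order.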
## Source

- **Repository**: https://github.com/juspay/hyperswitch
- **Language**: Rust
- **License**: Apache 2.0

## Generation Method

Samples were generated using token-aware strategies:

1. **Large files** (2k-16k tokens) included as-is
2. **Small files** combined within same crate until reaching 2k+ tokens
3. **Module clusters** grouped by directory structure
4. **Complete crates** for small crates that fit within context

All token counts measured using the Kwaipilot/KAT-Dev tokenizer.
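Step 2 above can be sketched as a greedy accumulator: merge files from one crate until the running token count clears the 2k minimum, flushing early if the next file would exceed the context limit. This is an illustrative reconstruction, not the actual generation script:

```python
def combine_small_files(files, min_tokens=2000, max_tokens=16000):
    """Greedily merge (path, token_count) pairs from one crate into
    groups of at least `min_tokens`, never exceeding `max_tokens`."""
    samples, current, current_tokens = [], [], 0
    for path, tokens in files:
        # Flush before overflowing the context limit
        if current and current_tokens + tokens > max_tokens:
            samples.append(current)
            current, current_tokens = [], 0
        current.append(path)
        current_tokens += tokens
        # Flush once the minimum sample size is reached
        if current_tokens >= min_tokens:
            samples.append(current)
            current, current_tokens = [], 0
    if current:  # leftover files below the threshold
        samples.append(current)
    return samples

# Hypothetical (path, token_count) pairs for one crate:
files = [("a.rs", 800), ("b.rs", 900), ("c.rs", 700), ("d.rs", 1500)]
groups = combine_small_files(files)
```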