---
license: apache-2.0
task_categories:
- text-generation
- sentence-similarity
language:
- en
tags:
- code
pretty_name: NPset-2-Python-Edu
size_categories:
- 1M<n<10M
---

![Axiomic Banner](AxiomicBanner.png)

# NPset-2 (Python-Edu)

A normalized semi-synthetic Python dataset for training small language models on code logic without the overhead of raw code syntax.

![Tokenizer chart](tokenizer_chart.png)

## Why

Small language models trained on natural language corpora develop latent representations of logical constructs -- iteration, conditionals, data flow, function composition -- yet they struggle to apply this reasoning to source code. Raw syntax (delimiters, indentation conventions, language-specific idioms) occupies a disproportionate share of the token budget, requires a vocabulary of code-specific tokens, and introduces a surface-form distribution shift relative to the model's prior knowledge. NPset-2 addresses this by normalizing Python source through an AST-based converter that strips syntactic noise while preserving the full logical structure of each program. The result is a pseudocode representation composed entirely of natural language tokens, one that aligns more directly with the semantic representations already present in small models and lets them reason about what code *does* rather than expend capacity learning what it *looks like*.


## The v2 Specification

NPset-2 introduces significant changes over v1, trading some relative token compression for far lower semantic overhead. Illustrative sketches of the normalization and of the English filtering step follow the list below.


1.  **Explicit Block Scoping**: All indented blocks (if, for, while, try, with) now use numbered/named anchors: `begin if 1` ... `end if 1`, giving small models unambiguous attention targets for block boundaries.
2.  **Natural Language Phrasing**:
    *   **Functions**: `function find_max with input numbers`
    *   **Calls**: `call fibonacci with n - 1`
    *   **Loops**: `exit loop` and `next loop` instead of `break` and `continue`.
3.  **Slicing**: Replaced symbol-heavy `[0:10]` with `starting from index 0 to 10`.
4.  **Semantic Normalization**: 
    *   `isinstance(x, int)` -> `type of x is int`
    *   `lambda x: x+1` -> `function taking x returning x + 1`
    *   `async for` -> `async for` (the forced underscores used previously are no longer inserted).
5.  **Strict English Filtering**: Documents with >0.5% Chinese characters are dropped, and all remaining text is scrubbed of non-ASCII characters to maintain a clean, English-only training distribution.
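
As a rough illustration of the rules above, here is a short Python function next to one plausible v2 rendering. This pair was written for this card and is not a verbatim sample from the dataset: the function header, slicing, `isinstance`, `break`, and block anchors follow the mappings listed, while the assignment, loop-header, and `return` phrasing are guesses and may differ from the actual converter output.

```python
def first_int_prefix(values):
    head = values[0:10]
    for v in head:
        if isinstance(v, int):
            break
    return head
```

```
function first_int_prefix with input values
set head to values starting from index 0 to 10
begin for 1 for each v in head
begin if 1 type of v is int
exit loop
end if 1
end for 1
return head
```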
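
The English filtering described in item 5 can be sketched in a few lines of Python. This is an approximation of the stated heuristic, not the actual pipeline code; the CJK Unicode range and the handling of empty documents are assumptions.

```python
import re

# Common CJK Unified Ideographs block; the real filter may cover more ranges.
CJK_RE = re.compile(r"[\u4e00-\u9fff]")

def keep_document(text: str, max_cjk_ratio: float = 0.005) -> bool:
    """Keep a document only if its Chinese-character ratio is at most 0.5%."""
    if not text:
        return False
    return len(CJK_RE.findall(text)) / len(text) <= max_cjk_ratio

def scrub_non_ascii(text: str) -> str:
    """Strip any remaining non-ASCII characters from kept documents."""
    return text.encode("ascii", errors="ignore").decode("ascii")
```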

## Performance (Context Capacity)

Measured against standard tokenizers, TinyDSL v2 substantially expands the effective context window for logic-heavy training when a natural-language-oriented tokenizer is used:

| Tokenizer | Token Count Change | Context Capacity (2048 window) |
| :--- | :--- | :--- |
| **GPTX (Custom 32k)** | **-13.7%** | **7.1 -> 8.3 examples (+15.9%)** |
| **GPT-2** | **-16.6%** | **7.4 -> 8.9 examples (+19.9%)** |
| Qwen 2.5 | +8.1% | 10.1 -> 9.3 examples (-7.5%) |
| Llama 3 | +2.2% | 8.3 -> 8.1 examples (-2.2%) |

*Note: While raw character counts increase by ~17%, the "Token Tax" for logical constructs is drastically reduced for models not pre-specialized for code syntax.*
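
The figures above are the dataset's own measurements. As a sketch of how a comparable number could be re-derived for any tokenizer on the Hugging Face Hub (field names follow the schema in the next section), one could do something like:

```python
from transformers import AutoTokenizer

def token_change(tokenizer_name: str, original_code: str, normalized_code: str) -> float:
    """Relative change in token count going from raw Python to the v2 pseudocode.

    Negative values mean the pseudocode needs fewer tokens than the original source.
    """
    tok = AutoTokenizer.from_pretrained(tokenizer_name)
    before = len(tok.encode(original_code))
    after = len(tok.encode(normalized_code))
    return (after - before) / before

# e.g. token_change("gpt2", row["original_code"], row["code"])
```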

## Format

Parquet format with the following schema:

| Field | Type | Description |
|---|---|---|
| `code` | string | Normalized TinyDSL v2 pseudocode |
| `original_code` | string | Original Python source |
| `original_language` | string | Always `python` |
| `score` | float | Quality/Difficulty score (if available from source) |

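A minimal loading sketch using the `datasets` library; the repository id below is a placeholder, so substitute the actual Hub path of this dataset.

```python
from datasets import load_dataset

# Placeholder repo id and split name -- replace with this dataset's actual Hub path.
ds = load_dataset("<org>/NPset-2-Python-Edu", split="train")

row = ds[0]
print(row["code"])               # normalized TinyDSL v2 pseudocode
print(row["original_code"])      # original Python source
print(row["original_language"])  # always "python"
print(row["score"])              # quality/difficulty score, when available
```
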
## Sources

- `HuggingFaceTB/stack-edu` (`python` subset)