Datdanboi25 committed
Commit e489f8c · 1 Parent(s): 9b9b850

init commit

.gitignore ADDED
@@ -0,0 +1,9 @@
+ *.py
+ *.jsonl
+ .claude
+ __pycache__/
+ *.json
+ *.txt
+ Stacksystem/*
+ v2_changes.md
+ version_1_README.md
AxiomicBanner.png ADDED

Git LFS Details

  • SHA256: 75c38db1d76736a243905a29ba6e85eaf65e14037cd78c4f391bc146fe36fe4e
  • Pointer size: 130 Bytes
  • Size of remote file: 88 kB
README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ license: apache-2.0
+ task_categories:
+ - text-generation
+ - sentence-similarity
+ language:
+ - en
+ tags:
+ - code
+ pretty_name: NPset-2-Python-Edu
+ size_categories:
+ - 1M<n<10M
+ ---
+
+ ![Axiomic Banner](AxiomicBanner.png)
+
+ # NPset-2 (Python-Edu)
+
+ A normalized, semi-synthetic Python dataset for training small language models on code logic without the overhead of raw code syntax.
+
+ ![Tokenizer chart](tokenizer_chart.png)
+
+ ## Why NPset-2?
+
+ Small language models develop latent logical representations of code but struggle with its syntactic overhead. NPset-2 addresses this through the **TinyDSL v2** specification, which strips syntactic noise and provides explicit logical anchors.
+
+ ## The v2 Specification
+
+ NPset-2 introduces significant improvements over v1, designed to minimize "token tax" and maximize the model's ability to track long-range logical dependencies:
+
+ 1. **Explicit Block Scoping**: All indented blocks (if, for, while, try, with) now use numbered/named anchors: `begin if 1` ... `end if 1`. This provides unambiguous attention anchors for small models.
+ 2. **Natural Language Phrasing**:
+    * **Functions**: `function find_max with input numbers`
+    * **Calls**: `call fibonacci with n - 1`
+    * **Loops**: `exit loop` and `next loop` instead of `break` and `continue`.
+ 3. **Slicing**: Replaced symbol-heavy `[0:10]` with `starting from index 0 to 10` (sketched after this list).
+ 4. **Semantic Normalization**:
+    * `isinstance(x, int)` -> `type of x is int`
+    * `lambda x: x + 1` -> `function taking x returning x + 1`
+    * `async for` stays `async for` (v1's forced underscores are removed).
+ 5. **Strict English Filtering**: Documents with >0.5% Chinese characters are dropped, and all remaining text is scrubbed of non-ASCII characters to maintain a clean, English-only training distribution (also sketched after this list).
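+
+ To make rules 3 and 5 concrete, here is a minimal, hypothetical sketch of how they could be implemented. This is not the dataset's actual pipeline; the function names, the regex, and the exact phrase spacing are illustrative assumptions:
+
+ ```python
+ import re
+
+ # Rule 3 (illustrative): rewrite symbol-heavy slices like "[0:10]" as a phrase.
+ SLICE_RE = re.compile(r"\[(\d+):(\d+)\]")
+
+ def normalize_slices(line: str) -> str:
+     return SLICE_RE.sub(lambda m: f" starting from index {m.group(1)} to {m.group(2)}", line)
+
+ # Rule 5 (illustrative): drop documents with >0.5% Chinese characters,
+ # then scrub any remaining non-ASCII characters.
+ def is_english(text: str, threshold: float = 0.005) -> bool:
+     chinese = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
+     return bool(text) and chinese / len(text) <= threshold
+
+ def scrub_non_ascii(text: str) -> str:
+     return text.encode("ascii", errors="ignore").decode("ascii")
+
+ print(normalize_slices("head = data[0:10]"))
+ # head = data starting from index 0 to 10
+ ```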
+
+ ## Performance (Context Capacity)
+
+ When tested against standard tokenizers, TinyDSL v2 significantly expands the effective context window for logic-heavy training:
+
+ | Tokenizer | Token Reduction (positive = fewer tokens) | Context Capacity (2048-token window) |
+ | :--- | :--- | :--- |
+ | **GPTX (Custom 32k)** | **+13.7%** | **7.1 -> 8.3 examples (+15.9%)** |
+ | **GPT-2** | **+16.6%** | **7.4 -> 8.9 examples (+19.9%)** |
+ | Qwen 2.5 | -8.1% | 10.1 -> 9.3 examples (-7.5%) |
+ | Llama 3 | -2.2% | 8.3 -> 8.1 examples (-2.2%) |
+
+ *Note: While raw character counts increase by ~17%, the "token tax" for logical constructs is drastically reduced for models not pre-specialized for code syntax.*
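+
+ The capacity comparison can be reproduced in spirit with a few lines of code. A minimal sketch, assuming the `transformers` library and the public `gpt2` tokenizer (the actual evaluation script is not included here):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tok = AutoTokenizer.from_pretrained("gpt2")
+
+ def examples_per_window(samples: list[str], window: int = 2048) -> float:
+     # Average tokens per sample, then how many samples fit in one window.
+     avg = sum(len(tok.encode(s)) for s in samples) / len(samples)
+     return window / avg
+
+ # Compare the normalized `code` column against `original_code` to estimate
+ # the "token tax" difference reported in the table above.
+ ```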
+
+ ## Format
+
+ The data is stored in Parquet format with the following schema:
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `code` | string | Normalized TinyDSL v2 pseudocode |
+ | `original_code` | string | Original Python source |
+ | `original_language` | string | Always `python` |
+ | `score` | float | Quality/difficulty score (if available from the source) |
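+
+ A short loading sketch, assuming the `datasets` library; the generic Parquet loader avoids hard-coding any repository ID:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load all shards directly from the data/ directory.
+ ds = load_dataset("parquet", data_files="data/shard_*.parquet", split="train")
+
+ row = ds[0]
+ print(row["code"][:200])           # TinyDSL v2 pseudocode
+ print(row["original_code"][:200])  # original Python source
+ ```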
+
+ ## Sources
+
+ The dataset is compiled from high-quality educational and Stack Overflow-style Python sources, including:
+ - `dbands/pythonMath`
+ - `nomic-ai/cornstack-python-v1`
+ - `zaydzuhri/stack-edu-python`
+ - `jtatman/python-code-dataset-500k`
+ - and other curated sources.
data/shard_00000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bfcb064ec3fd50a3bbf6ab1d72be9c2acbcf1fe451e94c9a4ba013f923ef1833
+ size 693118780
data/shard_00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:351817e1362280f0ff8544f22af8652a949561102821a4eb6c9a3330d25d55f4
+ size 646783007
data/shard_00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:361ff375022a9e7d5f7f0f535716289208cd9e3eb4d768f63aa0f3e033a10658
+ size 633645813
data/shard_00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4bdaeb6a84d9d0de6c2c2c859dcfdbe32ff81689b0f6f03015a36731d125faff
+ size 635588531
data/shard_00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a8edb0da8ef3aea8c0155802f6e4d0523428d0391a03389e3131936bffabf6b
+ size 645343997
data/shard_00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:438eb54e8d8757892073ecdbd3ab562a376f11b214110a2ba0bc283cd63bf5de
+ size 652376247
data/shard_00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:179e57c7498bdd483893f6e21890c32878c5c20788bb1144b6a83d01d09a863a
+ size 670404777
data/shard_00007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f013334a6bc0d8ec4396594a47dc39abd939c59234bdcc6a677ca675cc833b3
+ size 689823032
data/shard_00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29a628aebac56d6a4a5caa27769358065f10de1e833180780e337692eeb34744
+ size 714924244
data/shard_00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d76ceb0e1c85e9bed79d76fbbbde6a9bfe6ba22dd122f8c5ad9c5b08bae5142
+ size 736037344
tokenizer_chart.png ADDED

Git LFS Details

  • SHA256: e020bff51bd7fb2b71a3bd32f98ec1d3ab58eb8d2edc045f499d86fd3c1ae6f5
  • Pointer size: 131 Bytes
  • Size of remote file: 263 kB