---
license: apache-2.0
task_categories:
- text-generation
- sentence-similarity
language:
- en
tags:
- code
pretty_name: NPset-2-Python-Edu
size_categories:
- 1M<n<10M
---

# NPset-2-Python-Edu

Educational Python code normalized into **TinyDSL v2** pseudocode: ASCII-only, English-like text that preserves the program logic while replacing dense code syntax with readable phrases.

## Normalization Pipeline

4. **Natural-Language Mapping**: Python constructs are rewritten as English-like phrases, for example:
   * `type(x) is int` -> `type of x is int`
   * `lambda x: x+1` -> `function taking x returning x + 1`
   * `async for` -> `async for` (removing forced underscores)
5. **Strict English Filtering**: Documents with >0.5% Chinese characters are dropped, and all remaining text is scrubbed of non-ASCII characters to maintain a clean, English-only training distribution.

## Performance (Context Capacity)

Because TinyDSL v2 replaces dense code syntax with natural-language phrases, it significantly expands the effective context window for tokenizers trained on natural language, at a small cost for code-specialized tokenizers:

| Tokenizer | Reduction (Tokens) | Context Capacity (2048-token window) |
| :--- | :--- | :--- |
| **GPTX (Custom 32k)** | **13.7%** | **7.1 -> 8.3 examples (+15.9%)** |
| **GPT-2** | **16.6%** | **7.4 -> 8.9 examples (+19.9%)** |
| Qwen 2.5 | -8.1% | 10.1 -> 9.3 examples (-7.5%) |
| Llama 3 | -2.2% | 8.3 -> 8.1 examples (-2.2%) |

*Note: While raw character counts increase by ~17%, the "token tax" on logical constructs is drastically reduced for models not pre-specialized for code syntax.*

## Format

Parquet files with the following schema:

| Field | Type | Description |
|---|---|---|
| `code` | string | Normalized TinyDSL v2 pseudocode |
| `original_code` | string | Original Python source |
| `original_language` | string | Always `python` |
| `score` | float | Quality/difficulty score (where available from the source) |

## Sources

- `HuggingFaceTB/stack-edu` (Python subset)
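
## Example Usage

The dataset is plain Parquet, so it loads directly with `datasets`. A minimal sketch, using a placeholder repository id (this card does not state the actual Hub path):

```python
from datasets import load_dataset

# "user/NPset-2-Python-Edu" is a placeholder; substitute the real Hub path.
ds = load_dataset("user/NPset-2-Python-Edu", split="train")

row = ds[0]
print(row["code"])           # TinyDSL v2 pseudocode
print(row["original_code"])  # the matching original Python source
```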
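
The converter that produces the `code` column is not distributed with this dataset, but a toy sketch can illustrate the kind of rewriting step 4 describes. The two regex rules below are simplified stand-ins for the real mapping table, not the actual TinyDSL v2 rule set:

```python
import re

# Simplified stand-ins for the mappings listed in step 4; the real
# TinyDSL v2 pipeline is more elaborate and handles full Python syntax.
RULES = [
    # `type(x) is int` -> `type of x is int`
    (re.compile(r"type\((\w+)\) is (\w+)"), r"type of \1 is \2"),
    # `lambda x: x+1` -> `function taking x returning x+1`
    (re.compile(r"lambda (\w+)\s*:\s*([^,)\n]+)"), r"function taking \1 returning \2"),
]

def to_pseudocode(source: str) -> str:
    """Apply each rewrite rule to the source text in order."""
    for pattern, replacement in RULES:
        source = pattern.sub(replacement, source)
    return source

print(to_pseudocode("add_one = lambda x: x + 1"))
# add_one = function taking x returning x + 1
```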
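
Step 5's language filter can be sketched the same way. A minimal version, assuming the CJK Unified Ideographs block (U+4E00-U+9FFF) is what counts as a "Chinese character" (the card does not specify the exact code-point ranges):

```python
def chinese_ratio(text: str) -> float:
    """Fraction of characters falling in the CJK Unified Ideographs block."""
    if not text:
        return 0.0
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    return cjk / len(text)

def filter_document(text: str) -> str | None:
    """Drop documents with >0.5% Chinese characters, then scrub non-ASCII."""
    if chinese_ratio(text) > 0.005:
        return None  # document dropped entirely
    return text.encode("ascii", errors="ignore").decode("ascii")
```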
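
Finally, the context-capacity figures above are consistent with dividing a 2048-token window by the mean tokens per example. A sketch of that measurement for GPT-2, using the schema fields defined above (the sample size and exact methodology are assumptions):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("user/NPset-2-Python-Edu", split="train")  # placeholder id
sample = ds.select(range(1_000))  # assumed sample size
tok = AutoTokenizer.from_pretrained("gpt2")

def mean_tokens(column: str) -> float:
    lengths = [len(tok(text)["input_ids"]) for text in sample[column]]
    return sum(lengths) / len(lengths)

before, after = mean_tokens("original_code"), mean_tokens("code")
print(f"token reduction: {1 - after / before:.1%}")
print(f"capacity: {2048 / before:.1f} -> {2048 / after:.1f} examples per window")
```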