---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- gpt2
- causal-lm
- text-generation
- code
- coding
- reasoning
- instruct
- lightweight
- safetensors
license: other
base_model: openai-community/gpt2-medium
datasets:
- WithinUsAI/GPT-2-to-GPT-5-5k
- TeichAI/gpt-5.1-high-reasoning-1000x
- TeichAI/gpt-5.1-codex-max-1000x
model-index:
- name: WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B
  results: []
---

# WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B

<p align="center">
  <b>GPT-2 Medium enhanced toward "GPT-5.2-style" reasoning + codex behaviors.</b><br/>
  Small footprint. Serious coding focus. ⚡🧠
</p>

## Overview

**WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B** is a GPT-2-family causal language model (≈0.4B class) built from **`openai-community/gpt2-medium`** and fine-tuned by **WithIn Us AI** to strengthen:

- structured reasoning
- instruction following
- code generation & refactoring reliability

The name **"GPT2.5.2"** is a WithIn Us AI version marker:

- **GPT(2)** = GPT-2 Medium base
- **(5.2)** = target behavior style (reasoning + codex competence)
- **(2.5.2)** = the enhanced line produced by WithIn Us AI fine-tuning + methodology

**Architecture:** gpt2
**Model size:** 0.4B params
**Tensor type:** F32 (as hosted)

---

## What it's good at ✨

- Writing practical code with clear structure
- Debugging: root cause → fix → corrected code
- Refactoring with invariants + complexity notes
- Algorithmic reasoning in compact, teachable steps

---

## Intended use

### Recommended ✅

- Coding assistant (Python-first; other languages okay)
- Debugging and patch suggestions
- Refactors and performance cleanups
- Reasoned technical answers with steps/constraints

### Not recommended 🚫

- High-stakes decisions (medical/legal/financial)
without expert review
- Safety-critical systems without strict validation/testing

---

## How to use (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B"

tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

prompt = (
    "You are a senior engineer.\n"
    "Task: Implement a robust JSONL reader in Python.\n"
    "First list edge cases, then write the implementation with comments.\n\n"
    "Answer:\n"
)

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tok.decode(out[0], skip_special_tokens=True))
```
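Small instruct-tuned models tend to be sensitive to prompt layout. A tiny helper can keep the role / task / `Answer:` structure from the example consistent across calls; this is a sketch of ours (`build_prompt` is not part of the model or the `transformers` API):

```python
def build_prompt(role: str, task: str) -> str:
    # Assemble the instruction layout used in the example above:
    # role line, task line, edge-case request, then an "Answer:" marker.
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        "First list edge cases, then write the implementation with comments.\n\n"
        "Answer:\n"
    )

prompt = build_prompt("a senior engineer", "Implement a robust JSONL reader in Python.")
print(prompt)
```

The resulting string can be passed to `tok(...)` exactly as in the snippet above.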