---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- gpt2
- causal-lm
- text-generation
- code
- coding
- reasoning
- instruct
- lightweight
- safetensors
license: other
base_model: openai-community/gpt2-medium
datasets:
- WithinUsAI/GPT-2-to-GPT-5-5k
- TeichAI/gpt-5.1-high-reasoning-1000x
- TeichAI/gpt-5.1-codex-max-1000x
model-index:
- name: WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B
  results: []
---

# Model Card Template (WithinUsAI Standard)

**Top metadata:** YAML above
**Sections:** Overview → Intended Use → How to Use → Training Data → Finetuning → Evaluation → Limitations → License/Thanks → Citation

---

# WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B

<p align="center">
  <b>GPT-2 Medium enhanced toward “GPT-5.2-style” reasoning + codex behaviors.</b><br/>
  Small footprint. Serious coding focus. ⚡🧠
</p>

## Overview

**WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B** is a GPT-2-family causal language model (≈0.4B class) built from **`openai-community/gpt2-medium`** and fine-tuned by **WithIn Us AI** to strengthen:

- structured reasoning
- instruction following
- code generation & refactoring reliability

The name **“GPT2.5.2”** is a WithIn Us AI version marker:

- **GPT(2)** = GPT-2 Medium base
- **(5.2)** = target behavior style (reasoning + codex competence)
- **(2.5.2)** = the enhanced line produced by WithIn Us AI fine-tuning + methodology

**Architecture:** gpt2
**Model size:** 0.4B params
**Tensor type:** F32 (as hosted)

---

## What it’s good at ✨

- Writing practical code with clear structure
- Debugging: root cause → fix → corrected code
- Refactoring with invariants + complexity notes
- Algorithmic reasoning in compact, teachable steps

---

## Intended use

### Recommended ✅

- Coding assistant (Python-first; other languages okay)
- Debugging and patch suggestions
- Refactors and performance cleanups
- Reasoned technical answers with steps/constraints

### Not recommended 🚫

- High-stakes decisions (medical/legal/financial) without expert review
- Safety-critical systems without strict validation/testing

---

## How to use (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B"

tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

prompt = (
    "You are a senior engineer.\n"
    "Task: Implement a robust JSONL reader in Python.\n"
    "First list edge cases, then write the implementation with comments.\n\n"
    "Answer:\n"
)

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tok.decode(out[0], skip_special_tokens=True))
```
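The generation call above samples with `temperature=0.7` and `top_p=0.95`. As a minimal illustration of what nucleus (top-p) filtering does, here is a toy sketch in plain Python; the helper name and the probability values are hypothetical, and Transformers implements this internally, so you never need to write this yourself:

```python
def top_p_filter(probs, top_p=0.95):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize the survivors to sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = {}, 0.0
    for tok, p in ranked:
        kept[tok] = p
        cum += p
        if cum >= top_p:
            break
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# Toy next-token distribution (hypothetical values):
probs = {"def": 0.5, "class": 0.3, "import": 0.15, "lambda": 0.05}
filtered = top_p_filter(probs, top_p=0.9)
print(filtered)  # "def", "class", "import" survive; "lambda" is cut
```

Lower `top_p` trims more of the low-probability tail, which tends to make code generations more deterministic; raising `temperature` flattens the distribution before this cut, which increases variety.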