---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- gpt2
- causal-lm
- text-generation
- code
- coding
- reasoning
- instruct
- lightweight
- safetensors
- withinusai
license: other
license_name: withinusai-custom-license
license_link: LICENSE
base_model: openai-community/gpt2-medium
base_model_relation: finetune
datasets:
- WithinUsAI/GPT-2-to-GPT-5-5k
- TeichAI/gpt-5.1-codex-max-1000x
- TeichAI/gpt-5.1-high-reasoning-1000x
metrics:
- pass@1
- accuracy
- exact_match
model-index:
- name: WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B
results: []
---
# WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B
<p align="center">
<b>GPT-2 Medium enhanced toward GPT-5.2-style reasoning + codex behavior.</b><br/>
Small footprint. Built to ship working code. ⚡🧠
</p>
## What “GPT2.5.2” means (project naming)
This model begins as **GPT-2 Medium** and is fine-tuned by **WithIn Us AI** with the goal of pushing behavior toward a **GPT-5.2 “twin target”** style: stronger stepwise reasoning, more reliable code generation, and improved instruction-following.
- **GPT(2)** = GPT-2 Medium base
- **GPT(5.2)** = target behavior style (reasoning + codex competence)
- **GPT(2.5.2)** = WithIn Us AI enhanced release line/version marker
## Model details
- **Model type:** Decoder-only causal language model (GPT-2 family)
- **Architecture:** gpt2
- **Size class:** ~0.4B parameters
- **Base model:** `openai-community/gpt2-medium`
- **Base model relation:** fine-tune
- **Primary strengths:** coding assistance, refactors, debugging, structured reasoning
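To sanity-check the size class locally, a quick parameter count over the loaded checkpoint works. This is a small verification sketch (the `model_id` matches the Quickstart below), not part of the release tooling:

```python
from transformers import AutoModelForCausalLM

# Load the checkpoint and count parameters to confirm the ~0.4B size class.
model = AutoModelForCausalLM.from_pretrained("WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")  # GPT-2 Medium-class checkpoints land around 0.35-0.4B
```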
## Intended use
### Recommended ✅
- Code generation & completion (Python-first; other languages supported)
- Debugging: error → root cause → patch (see the prompt sketch at the end of this section)
- Refactoring: preserve behavior, improve clarity/perf
- Stepwise technical reasoning with constraints and edge cases
### Not recommended 🚫
- High-stakes decisions (medical/legal/financial) without expert review
- Safety-critical systems without strict validation & monitoring
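For the debugging workflow listed above (error → root cause → patch), a prompt along these lines tends to work well. The template and the example inputs are illustrative suggestions, not an official format:

```python
# Illustrative debugging prompt template (hypothetical error and snippet).
error_log = "TypeError: 'NoneType' object is not subscriptable"
snippet = "user = db.get_user(uid)\nprint(user['name'])"

prompt = (
    "You are a senior software engineer.\n"
    f"Error:\n{error_log}\n\n"
    f"Code:\n{snippet}\n\n"
    "First state the root cause, then provide a minimal patch.\n\n"
    "Answer:\n"
)
# Generate with the same settings as the Quickstart below.
```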
## Quickstart (Transformers)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "WithinUsAI/GPT2.5.2-high-reasoning-codex-0.4B"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

# Instruction-style prompt: ask for edge cases first, then the implementation.
prompt = (
    "You are a senior software engineer.\n"
    "Task: Implement a robust JSONL reader in Python.\n"
    "First list edge cases, then write the implementation with comments.\n\n"
    "Answer:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; reuse EOS
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
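
# Optional variant (a suggested sketch, not part of the original quickstart):
# greedy decoding tends to give more deterministic code output than sampling.
greedy_out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(greedy_out[0], skip_special_tokens=True))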