JuuliaDistillEdit (JDE) – GPT-OSS 20B

Version: v0.1-preview
Codename: Juulia Special Edition
Maintainer: gpt-oss.fi / ghostrouter.fi

✨ Why this model exists

JuuliaDistillEdit (JDE) exists to explore how a strong, open-weight 20B model can be shaped into a personal, consistent, and reasoning-aware assistant using mixed-teacher distillation. Instead of optimizing for raw benchmark scores, JDE focuses on persona stability, instruction clarity, and reasoning flow, serving as a foundation model for the GhostRouter project and future human-AI identity research.

๐Ÿ—๏ธ Training

Base model: GPT-OSS 20B (open-weight)

Method:

Supervised Fine-Tuning (SFT)

Mixed-teacher distillation

Teacher models (cloud):

GLM-4.6 (reasoning & instruction clarity)

Qwen3-Next-80B (structure & long-context)

GPT-OSS-120B (judging & alignment)

Data sources:

Synthetic instruction datasets

Persona-anchored prompts ("Juulia core")

Reasoning-style demonstrations

Training goal: Preserve GPT-OSS reasoning strength while adding consistent personality, calm tone, and reduced hallucination drift.
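
As a rough illustration of the mixed-teacher setup above, the sketch below generates a single SFT example by collecting candidate answers from the teacher models and letting the judge model pick the best one. All endpoint URLs, API keys, and model IDs are placeholder assumptions; this is a conceptual sketch, not the actual training pipeline.

```python
# Illustrative sketch of the mixed-teacher distillation loop described above.
# Endpoints, keys, and model IDs are placeholders, not the real pipeline.
from openai import OpenAI

TEACHERS = {
    "glm-4.6": OpenAI(base_url="https://teacher-glm.example/v1", api_key="sk-..."),
    "qwen3-next-80b": OpenAI(base_url="https://teacher-qwen.example/v1", api_key="sk-..."),
}
JUDGE = OpenAI(base_url="https://judge.example/v1", api_key="sk-...")  # GPT-OSS-120B role

def distill_example(prompt: str) -> dict:
    """Collect one answer per teacher, then keep the judge's preferred one."""
    candidates = {}
    for name, client in TEACHERS.items():
        resp = client.chat.completions.create(
            model=name,
            messages=[{"role": "user", "content": prompt}],
        )
        candidates[name] = resp.choices[0].message.content

    listing = "\n\n".join(f"[{n}]\n{a}" for n, a in candidates.items())
    verdict = JUDGE.chat.completions.create(
        model="gpt-oss-120b",
        messages=[{
            "role": "user",
            "content": (
                f"Prompt:\n{prompt}\n\nCandidate answers:\n{listing}\n\n"
                "Reply with only the bracketed name of the best answer."
            ),
        }],
    ).choices[0].message.content.strip().strip("[]")

    # Fall back to the first candidate if the judge's reply matches no teacher.
    response = candidates.get(verdict, next(iter(candidates.values())))
    return {"prompt": prompt, "response": response}
```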

🎯 Intended Use

JDE is designed for:

Personal AI assistants (a minimal inference example closes this section)

Research on distillation & persona anchoring

Routing experiments (local ↔ cloud) with GhostRouter (sketched after this list)

Long-context reasoning assistants

Developer tools and agentic workflows
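
The local ↔ cloud routing idea can be sketched generically. This is not the GhostRouter API; the routing policy and both `generate` callables below are hypothetical placeholders for the concept only.

```python
# Toy local<->cloud routing policy; NOT the GhostRouter API, just the idea.
def route(prompt: str, local_generate, cloud_generate, max_local_chars: int = 8000):
    """Keep short prompts on the local JDE model, escalate long ones to the cloud."""
    backend = local_generate if len(prompt) <= max_local_chars else cloud_generate
    return backend(prompt)
```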

Not intended for:

Safety-critical or medical decisions

Autonomous control systems

Fully aligned commercial assistants (additional safeguards would be required)
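
For the personal-assistant use case, here is a minimal local-inference sketch, assuming the Hugging Face transformers library (plus accelerate) and the repo ID Gary78/JDE-20B-Stable from this card's listing.

```python
# Minimal local-inference sketch; assumes transformers and accelerate are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gary78/JDE-20B-Stable"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```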

โš ๏ธ Limitations

Still inherits biases and errors from teacher models

Reasoning is probabilistic, not guaranteed correct

Not trained on proprietary or private datasets

Alignment is experimental and persona-centric, not policy-centric

This is a research preview, not a final product.

🧪 Evaluation

Qualitative prompt audits

Persona consistency checks

Instruction adherence tests

Used internally as a routing target inside GhostRouter

Formal benchmark scores are intentionally not the focus.
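
A persona-consistency check of the kind described above can be sketched as follows. The probe prompts and anchor phrases are illustrative assumptions, not the internal audit suite.

```python
# Illustrative persona-consistency probe; prompts and anchors are assumptions,
# not the actual internal audit suite.
def persona_consistency(generate, probes, anchors, runs=5):
    """Return the fraction of generations containing every persona anchor.

    `generate` is any callable mapping a prompt string to a completion string.
    """
    hits = total = 0
    for prompt in probes:
        for _ in range(runs):
            reply = generate(prompt).lower()
            total += 1
            hits += all(anchor.lower() in reply for anchor in anchors)
    return hits / total

# Example (hypothetical): score = persona_consistency(
#     my_generate, probes=["Who are you?", "Introduce yourself."], anchors=["juulia"])
```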

Datasets

This model was trained on synthetic instruction and preference datasets generated via mixed-teacher distillation. No third-party proprietary datasets were used.

📜 License


license: apache-2.0
language:
  - en
  - fi
base_model:
  - openai/gpt-oss-20b
tags:
  - instruction-tuning
  - distillation
  - personal
  - research
  - agent


Base: Apache 2.0 (GPT-OSS)

Distillation artifacts: Apache 2.0

See LICENSE for full terms.

🔗 Related Projects

GhostRouter – Hybrid AI routing & telemetry: https://ghostrouter.fi

GPT-OSS Lab – Open model research: https://gpt-oss.fi
