
GPT-OSS for education!

This model is a fine-tuned version of gpt-oss-20b trained to act as a teacher: it provides accurate feedback and guides the student towards the answer without giving it away. It also has an encouraging and friendly personality!

This work is inspired by the paper AI tutoring outperforms in-class active learning: an RCT introducing a novel research-based design in an authentic educational setting. I would advise anyone to read it, as it provides interesting insights on how to make learning with LLMs fun and efficient!

Inference with unsloth

Install dependencies

%%capture
import os, importlib.util
!pip install --upgrade -qqq uv
if importlib.util.find_spec("torch") is None or "COLAB_" in "".join(os.environ.keys()):    
    try: import numpy, PIL; get_numpy = f"numpy=={numpy.__version__}"; get_pil = f"pillow=={PIL.__version__}"
    except: get_numpy = "numpy"; get_pil = "pillow"
    !uv pip install -qqq \
        "torch>=2.8.0" "triton>=3.4.0" {get_numpy} {get_pil} torchvision bitsandbytes "transformers==4.56.2" \
        "unsloth_zoo[base] @ git+https://github.com/unslothai/unsloth-zoo" \
        "unsloth[base] @ git+https://github.com/unslothai/unsloth" \
        git+https://github.com/triton-lang/triton.git@0add68262ab0a2e33b84524346cb27cbb2787356#subdirectory=python/triton_kernels
elif importlib.util.find_spec("unsloth") is None:
    !uv pip install -qqq unsloth
!uv pip install --upgrade --no-deps transformers==4.56.2 tokenizers trl==0.22.2 unsloth unsloth_zoo

Load the model

from unsloth import FastLanguageModel
import torch
max_seq_length = 1024
dtype = None

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "paulprt/gpt-oss-edu",
    dtype = dtype, # None for auto detection
    max_seq_length = max_seq_length, # Choose any for long context!
    load_in_4bit = True,  # 4 bit quantization to reduce memory
)

Generate a chat completion

messages = [
    {"role": "user", "content": "### Problem statement:\nFind all positive integers n such that φ(n) = 12.\n### Answer:\nn = 13, 21, 26, 28, 36, 42.\n### Student question:\nI know that if n is prime, φ(n)=n-1, so n=13 is one solution. But how do I find the composite numbers? Can you guide me through the steps?"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    return_tensors = "pt",
    return_dict = True,
    reasoning_effort = "low", # Increase reasoning effort to avoid calculation errors
).to("cuda")
from transformers import TextStreamer
_ = model.generate(**inputs, max_new_tokens = 512, streamer = TextStreamer(tokenizer))

Message formatting

The model was trained using a template for the first user input that contains the problem statement, the answer, and the student question:

messages = [
    {"role": "user", "content": "### Problem statement:\nLet $A$ be a $3 \\times 3$ matrix with eigenvalues $\\lambda_1 = 2$, $\\lambda_2 = -1$, and $\\lambda_3 = 3$. The corresponding eigenvectors are $\\mathbf{v}_1 = [1, 0, 1]$, $\\mathbf{v}_2 = [0, 1, -1]$, and $\\mathbf{v}_3 = [1, 1, 0]$. Find the matrix $A$.\n### Student question:\nCan you explain to me how to use the eigendecomposition formula to find the matrix?"}
]
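A side note on prompts that contain LaTeX: in an ordinary Python string literal, every backslash must be doubled (as in the example above), otherwise sequences like the "\t" in "\times" are silently turned into control characters. A quick illustration:

```python
# Backslashes in LaTeX must be escaped in ordinary Python string literals;
# raw strings (r"...") keep them verbatim and are often the safer choice.
escaped = "$3 \\times 3$"  # double backslash in a normal literal
raw = r"$3 \times 3$"      # raw string: backslash preserved as-is
broken = "$3 \times 3$"    # the "\t" here becomes a tab character

print(escaped == raw)    # True: both produce the intended LaTeX
print("\t" in broken)    # True: the unescaped form is corrupted
```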

When building an app, the answer included in this first prompt should be hidden from the student in the conversation view; the exercise itself can be displayed on the interface.
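To keep the prompt format consistent with training, app code can assemble the first user turn from the exercise fields. This is a minimal sketch; the helper name build_first_message is my own, not part of the model:

```python
def build_first_message(problem: str, answer: str, question: str) -> dict:
    """Assemble the templated first user turn for gpt-oss-edu.

    The answer is embedded in the prompt so the model can check the
    student's work, but it should never be shown in the student-facing
    chat view.
    """
    content = (
        f"### Problem statement:\n{problem}\n"
        f"### Answer:\n{answer}\n"
        f"### Student question:\n{question}"
    )
    return {"role": "user", "content": content}

# Example: rebuild the number-theory prompt from the inference section.
messages = [build_first_message(
    problem="Find all positive integers n such that phi(n) = 12.",
    answer="n = 13, 21, 26, 28, 36, 42.",
    question="How do I find the composite solutions?",
)]
```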

  • Developed by: paulprt
  • License: cc-by-nc-4.0
  • Finetuned from model: unsloth/gpt-oss-20b-unsloth-bnb-4bit