Merry Christmas

Currently, we provide only a subset of evaluations from the lm-evaluation-harness (lm_eval) repository. If you have spare computing resources, please help us complete the mainstream benchmarks for gpt-oss-46b.

lm_eval

| Task | Metric | gpt-oss-20b | gpt-oss-46b | Change |
|------|--------|-------------|-------------|--------|
| GSM8K (0-shot) | Exact Match (flexible) | 0.2290 | 0.1638 | -28.47% |
| LAMBADA (OpenAI) | Accuracy | 0.2038 | 0.2668 | +30.91% |
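
To reproduce or extend these numbers, the lm-evaluation-harness CLI can be pointed at this checkpoint directly. A possible invocation is below; the task identifiers are the harness's standard names, while the dtype, few-shot, and batch-size settings are assumptions chosen to match the 0-shot numbers above:

pip install lm_eval

lm_eval --model hf \
    --model_args pretrained=Jo1uck/gpt-oss-46b,dtype=auto \
    --tasks gsm8k,lambada_openai \
    --num_fewshot 0 \
    --batch_size auto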

This model card is dedicated to the mid-sized gpt-oss-46b model. Check out gpt-oss-20b for the smaller model and gpt-oss-120b for the larger one.

Inference examples

Transformers

You can use gpt-oss with Transformers. If you use the Transformers chat template, it will automatically apply the harmony response format. If you use model.generate directly, you need to apply the harmony format manually using the chat template or use our openai-harmony package.

To get started, install the necessary dependencies to set up your environment:

pip install -U transformers kernels torch 

Once set up, you can run the model with the snippet below:

from transformers import pipeline
import torch

model_id = "Jo1uck/gpt-oss-46b"

# Build the pipeline; dtype and device placement are chosen automatically.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)
# The last entry of generated_text is the assistant's reply.
print(outputs[0]["generated_text"][-1])
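
If you prefer to call model.generate directly, the chat template applies the harmony format for you, as mentioned above. A minimal sketch of that route (the prompt and generation settings are illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jo1uck/gpt-oss-46b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

# The chat template renders the conversation in the harmony format.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:]))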

Alternatively, you can run the model via Transformers Serve to spin up an OpenAI-compatible web server:

transformers serve
transformers chat localhost:8000 --model-name-or-path Jo1uck/gpt-oss-46b
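
The server speaks the OpenAI chat completions API, so you can also query it from the official openai Python client. A minimal sketch, assuming the default local host and port from the commands above (the placeholder API key is an assumption for a local server):

from openai import OpenAI

# Point the client at the local transformers serve instance.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Jo1uck/gpt-oss-46b",
    messages=[{"role": "user", "content": "Explain quantum mechanics clearly and concisely."}],
)
print(response.choices[0].message.content)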

Learn more about how to use gpt-oss with Transformers.

vLLM

vLLM recommends using uv for Python dependency management. You can use vLLM to spin up an OpenAI-compatible web server. The following commands install a gpt-oss-compatible vLLM build and start the server, downloading the model automatically.

uv pip install --pre vllm==0.10.1+gptoss \
    --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
    --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
    --index-strategy unsafe-best-match

vllm serve Jo1uck/gpt-oss-46b
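
Besides serving, vLLM's offline Python API can run the model in-process. A minimal sketch, assuming a recent vLLM with the chat helper (sampling settings are illustrative):

from vllm import LLM, SamplingParams

# Load the model for offline (in-process) inference.
llm = LLM(model="Jo1uck/gpt-oss-46b")
params = SamplingParams(max_tokens=256)

messages = [{"role": "user", "content": "Explain quantum mechanics clearly and concisely."}]

# llm.chat applies the model's chat template before generating.
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)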

Learn more about how to use gpt-oss with vLLM.

Highlights

  • Permissive Apache 2.0 license: Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
  • Configurable reasoning effort: Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs; see the sketch after this list.
  • Full chain-of-thought: Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. The chain of thought is not intended to be shown to end users.
  • Fine-tunable: Fully customize models to your specific use case through parameter fine-tuning.
  • Agentic capabilities: Use the models’ native capabilities for function calling, web browsing, Python code execution, and Structured Outputs.
  • MXFP4 quantization: The models were post-trained with MXFP4 quantization of the MoE weights, making gpt-oss-120b run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the gpt-oss-46b model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
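
As referenced in the reasoning-effort bullet above, upstream gpt-oss documents setting the level in the system prompt ("Reasoning: low|medium|high"). A minimal sketch reusing the Transformers pipeline; whether this fine-tuned checkpoint responds to the setting as reliably as the base models is an assumption:

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Jo1uck/gpt-oss-46b",
    torch_dtype="auto",
    device_map="auto",
)

# Harmony convention: the reasoning effort is requested via the system prompt.
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])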
