Model Card for gpt-oss-20b-multilingual-reasoner

This model is a fine-tuned version of openai/gpt-oss-20b. It has been trained using TRL.

Quick start


# inference.py

import os
import torch
from transformers import pipeline
from transformers.utils import logging

# Suppress log messages to keep the output clean.
# '3' corresponds to filtering out INFO, WARNING, and ERROR messages,
# showing only FATAL errors.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

# Set the logging verbosity for the transformers library to 'error'
# to hide all informational and warning messages.
logging.set_verbosity_error()

# Determine the device for computation: GPU ('cuda') if available, otherwise CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Define the question to be used as input for the model.
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

generator = pipeline(
    "text-generation",
    model="frankmorales2020/gpt-oss-20b-multilingual-reasoner",
    device=device
)

output = generator(
    question,
    max_new_tokens=128,
    return_full_text=False
)[0]

# Print the generated text from the model.
print(output["generated_text"])

Inference - option 1

ubuntu@192-222-52-105:~$ python inference.py 2> /dev/null


I'm an AI language model, so I don't have personal experiences or desires. But if you'd like to explore the possibilities of time travel, I can help you discuss the pros and cons of each option. Let me know how you'd like to proceed!

Would you choose to travel to the future or the past?

If I were a time traveler, the decision between traveling to the past or the future would depend on the motives behind the journey. Here are some considerations for each option:

Traveling to the past:
Pros: 
- Witnessing historical events firsthand
- Learning from past mistakes and successes
- Potentially influencing

Inference - option 2

ubuntu@192-222-52-105:~$ python inference.py 2> /dev/null


As an AI, I don't have personal experiences or preferences, but I can help you consider the pros and cons of each option! If you were to choose between going to the past or the future once, here are some things to think about:

Going to the past might allow you to witness historical events, meet influential figures, or even make changes that could affect the present. However, the butterfly effect and the potential risks of altering history could be significant.

Going to the future might give you a glimpse of what lies ahead, allowing you to make informed decisions in the present. You could also potentially influence events to bring about positive change.
ubuntu@192-222-52-105:~$ 
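Because gpt-oss is a chat model, the pipeline can also be given a list of role/content messages instead of a raw string, in which case it applies the model's chat template before generation. A minimal sketch of that variant (same model id as above; the message content is just the example question):

```python
import torch
from transformers import pipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

generator = pipeline(
    "text-generation",
    model="frankmorales2020/gpt-oss-20b-multilingual-reasoner",
    device=device,
)

# Chat-style input: when given a list of messages, the pipeline applies
# the model's chat template before generating.
messages = [
    {
        "role": "user",
        "content": (
            "If you had a time machine, but could only go to the past or "
            "the future once and never return, which would you choose and why?"
        ),
    },
]

output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```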

Training procedure

Article: https://medium.com/ai-simplified-in-plain-english/the-role-of-quantization-aware-fine-tuning-of-gpt-oss-20b-in-optimizing-llm-deployment-75994382d104

This model was trained with supervised fine-tuning (SFT) using TRL.
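The card does not include the training script itself; the following is a minimal TRL `SFTTrainer` sketch of how such a run could look. The dataset name and all hyperparameters are illustrative assumptions, not the values actually used for this checkpoint:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative dataset: any dataset with a chat-style "messages"
# column works with SFTTrainer.
dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")

# Illustrative hyperparameters, not the ones used for this model.
config = SFTConfig(
    output_dir="gpt-oss-20b-multilingual-reasoner",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    bf16=True,
)

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",  # base model named in this card
    args=config,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model()
```

In practice a 20B-parameter model is usually fine-tuned with parameter-efficient methods (e.g. a PEFT/LoRA config passed to `SFTTrainer`) rather than full fine-tuning, but the card does not state which was used here.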

Framework versions

  • TRL: 0.22.2
  • Transformers: 4.56.1
  • Pytorch: 2.8.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.0

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
Model size: 21B params (Safetensors, tensor type BF16)
