---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - gpt_oss
  - trl
license: apache-2.0
language:
  - en
---

# gpt-oss-deepplan

This fine-tuned model is inspired by Nathan Lambert's talk "Traits of Next Generation Reasoning Models". It introduces a structured, multi-phase reasoning cycle for large language models (LLMs).

The fine-tune extends beyond simple question-answer pairs by adding explicit reasoning phases:

- **Planning** – the model outlines a step-by-step plan before attempting a solution.
- **Answering** – the model provides its initial solution.
- **Double-Checking** – the model revisits its answer, verifying correctness and coherence.
- **Confidence** – the model assigns a confidence score or justification for its final response.

This structure encourages the model to reason more transparently, self-correct, and calibrate its confidence.
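The four-phase cycle can be driven and consumed programmatically. The sketch below is an assumption about the output format: the exact phase markers the fine-tune emits are not specified here, so the `[Phase]` tags, `build_prompt`, and `parse_phases` helpers are hypothetical and should be adjusted to match the model's actual responses.

```python
import re

# Hypothetical phase tags -- an assumption, not the model's documented format.
PHASES = ["Planning", "Answering", "Double-Checking", "Confidence"]

def build_prompt(question: str) -> str:
    """Ask the model to answer using the four-phase reasoning cycle."""
    steps = "\n".join(f"[{p}] ..." for p in PHASES)
    return (
        f"Question: {question}\n\n"
        "Respond in these phases, each starting on its own line:\n"
        f"{steps}\n"
    )

def parse_phases(response: str) -> dict:
    """Split a phase-tagged response into a {phase: text} mapping."""
    pattern = r"\[(" + "|".join(re.escape(p) for p in PHASES) + r")\]"
    parts = re.split(pattern, response)
    # re.split with a capturing group yields [preamble, tag1, body1, tag2, body2, ...]
    return {parts[i]: parts[i + 1].strip() for i in range(1, len(parts) - 1, 2)}

# Example response in the assumed format:
example = (
    "[Planning] Break 17*24 into 17*20 + 17*4.\n"
    "[Answering] 340 + 68 = 408.\n"
    "[Double-Checking] 408 / 24 = 17, consistent.\n"
    "[Confidence] High -- arithmetic verified by an inverse check."
)
print(parse_phases(example)["Answering"])  # -> 340 + 68 = 408.
```

Keeping the phases machine-parseable like this makes it easy to, for example, surface only the final answer to users while logging the plan and self-check for evaluation.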

## Uploaded model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit

This gpt_oss model was trained 2x faster with Unsloth and Hugging Face's TRL library.