---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---

This fine-tuned model is inspired by Nathan Lambert's talk "Traits of Next Generation Reasoning Models".
It introduces a structured multi-phase reasoning cycle for large language models (LLMs).

The fine-tuned model extends beyond simple question-answer pairs by adding explicit reasoning phases:

- Planning – The model outlines a step-by-step plan before attempting a solution.
- Answering – The model provides its initial solution.
- Double-Checking – The model revisits its answer, verifying correctness and coherence.
- Confidence – The model assigns a confidence score or justification for its final response.

This structure encourages models to reason more transparently, self-correct, and calibrate their confidence.
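
The exact chat template used during fine-tuning is not published here, so the following is only a minimal sketch of how a prompt could request the four phases; the function name and phase wording are assumptions for illustration:

```python
# Hypothetical prompt builder illustrating the multi-phase reasoning cycle.
# The actual template baked into the fine-tune may differ.

PHASES = ["Planning", "Answering", "Double-Checking", "Confidence"]

def build_reasoning_prompt(question: str) -> str:
    """Wrap a question with instructions to answer in the four labeled phases."""
    phase_list = "\n".join(
        f"{i}. {phase}" for i, phase in enumerate(PHASES, start=1)
    )
    return (
        "Answer the question below, working through these phases and "
        "labeling each one:\n"
        f"{phase_list}\n\n"
        f"Question: {question}"
    )

print(build_reasoning_prompt("What is 17 * 23?"))
```

A prompt built this way can then be passed to the model through the standard `transformers` chat/generation APIs.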

# Uploaded model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit

This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)