gpt-oss-20b
gpt-oss-20b pre-trained model by midorin-Linux
This project uses Unsloth for fine-tuning. All training data is converted to the OpenAI Harmony format before training; even so, model outputs may occasionally fail to conform to the Harmony specification.
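The Harmony conversion step can be sketched as follows. This is an illustrative helper, not the project's actual converter; the special tokens (`<|start|>`, `<|message|>`, `<|channel|>`, `<|end|>`) follow the published Harmony format, but verify the details against the official specification before relying on them.

```python
# Illustrative sketch: render a chat transcript in Harmony-style special
# tokens. Token layout is based on the published gpt-oss Harmony format;
# treat the exact details as an assumption and check the official spec.

def render_harmony(messages):
    """Render a list of {"role": ..., "content": ...} dicts as one prompt string."""
    parts = []
    for msg in messages:
        if msg["role"] == "assistant":
            # Assistant turns carry a channel; "final" is the user-visible reply.
            parts.append(
                f"<|start|>assistant<|channel|>final<|message|>{msg['content']}<|end|>"
            )
        else:
            parts.append(f"<|start|>{msg['role']}<|message|>{msg['content']}<|end|>")
    return "".join(parts)

example = render_harmony([
    {"role": "user", "content": "Write a hello-world in C."},
    {"role": "assistant", "content": "#include <stdio.h> ..."},
])
```

A mismatch between this rendered layout and what the model emits at inference time is exactly the kind of non-conformance noted above.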
You can download the pre-trained weights from Hugging Face:
Safetensors repo: midorin-Linux/gpt-oss-20b-Coding-Distill
GGUF repo: midorin-Linux/gpt-oss-20b-Coding-Distill-GGUF
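As a minimal sketch, either repo can be fetched with `huggingface_hub`'s `snapshot_download`; the repo IDs are the ones listed above, while the local directory name is illustrative.

```python
# Minimal sketch: download the released checkpoints with huggingface_hub.
# Repo IDs are the ones listed above; the local directory is illustrative.
SAFETENSORS_REPO = "midorin-Linux/gpt-oss-20b-Coding-Distill"
GGUF_REPO = "midorin-Linux/gpt-oss-20b-Coding-Distill-GGUF"

def fetch(repo_id: str, local_dir: str) -> str:
    """Download a full repo snapshot and return its local path."""
    # Imported lazily so this file loads even without huggingface_hub installed.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)

# Example (large download, run deliberately):
# fetch(SAFETENSORS_REPO, "./gpt-oss-20b-coding-distill")
```

The GGUF repo is the one to use with llama.cpp-style runtimes; the safetensors repo is for transformers-based loading.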
This project implements a multi-phase fine-tuning pipeline for the GPT-OSS-20B model, leveraging conversation data from multiple state-of-the-art AI models to create a balanced, high-performance language model optimized for:
Why This Approach?
Traditional fine-tuning often suffers from: