midorin-Linux committed (verified)
Commit e7b389a · Parent: cebe538

Update README.md

Files changed (1): README.md (+2, −4)
@@ -27,8 +27,8 @@ We have actually trained the code and made it publicly available on GitHub. The
 ## Do you want to use pre-trained model?
 You can download pre-trained data from HuggingFace.
 
-**Safetensors repo**: [midorin-Linux/gpt-oss-20b-Coding-Distill](https://huggingface.co/midorin-Linux/gpt-oss-20b-Coding-Distill/edit/main/README.md)
-**GGUF repo**: In Preparation.
+**Safetensors repo**: [midorin-Linux/gpt-oss-20b-Coding-Distill](https://huggingface.co/midorin-Linux/gpt-oss-20b-Coding-Distill)
+**GGUF repo**: [midorin-Linux/gpt-oss-20b-Coding-Distill-GGUF](https://huggingface.co/midorin-Linux/gpt-oss-20b-Coding-Distill-GGUF)
 
 ## Overview
 This project implements a sophisticated multi-phase fine-tuning pipeline for the GPT-OSS-20B model, leveraging conversation data from multiple state-of-the-art AI models to create a balanced, high-performance language model optimized for:
@@ -70,5 +70,3 @@ GPT-OSS-20B Base Model
 ├─ Layers: Upper Attention layers + MLP + Adapter
 └─ Goal: Fine-tune attention patterns if needed
 ```
-
-
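
The updated links point at a standard Hugging Face model repo, so the Safetensors checkpoint could be pulled with the usual `transformers` auto classes. A minimal sketch (not part of the commit; the `load_model` helper name is ours, and the `AutoModelForCausalLM` API is assumed to apply to this checkpoint as it does to most Hub causal-LM repos):

```python
# Sketch: load the distilled checkpoint referenced in the README diff above.
# The repo id is taken from the updated "Safetensors repo" link.
REPO_ID = "midorin-Linux/gpt-oss-20b-Coding-Distill"

def load_model(repo_id: str = REPO_ID):
    """Download (on first call) and return (tokenizer, model) from the Hub."""
    # Heavy import kept local so the module imports cheaply.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # device_map="auto" spreads the 20B weights across available devices.
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    return tokenizer, model
```

The GGUF repo added in the same commit targets llama.cpp-style runtimes instead and is not loaded through `transformers`.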