Update README.md

README.md

Traditional fine-tuning often suffers from:

- **Catastrophic forgetting** when training on sequential datasets
- **Imbalanced capabilities** from single-source training
- **Style inconsistencies** across different task types

Our multi-phase approach, combining strategic layer freezing, replay buffers, and EWC (Elastic Weight Consolidation) regularization, addresses these challenges systematically.
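
As a rough illustration of how two of these mechanisms fit together, the sketch below combines selective layer freezing with an EWC penalty in PyTorch. This is a minimal sketch, not this repository's training code; the helpers `freeze_except`, `compute_fisher`, and `ewc_penalty`, and the `lambda_ewc` weight, are illustrative assumptions.

```python
# Illustrative sketch only -- not this project's actual training code.
# Assumes a Hugging Face-style causal LM whose forward pass returns a loss
# when labels are included in the batch; helper names are hypothetical.
import torch


def freeze_except(model, trainable_keywords=("mlp", "adapter")):
    """Freeze every parameter whose name matches none of the trainable keywords."""
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name.lower() for k in trainable_keywords)


def compute_fisher(model, data_loader, device="cuda"):
    """Diagonal Fisher estimate: mean squared gradient per trainable parameter."""
    fisher = {
        name: torch.zeros_like(param)
        for name, param in model.named_parameters()
        if param.requires_grad
    }
    for batch in data_loader:
        model.zero_grad()
        loss = model(**{k: v.to(device) for k, v in batch.items()}).loss
        loss.backward()
        for name, param in model.named_parameters():
            if name in fisher and param.grad is not None:
                fisher[name] += param.grad.detach() ** 2
    return {name: f / max(len(data_loader), 1) for name, f in fisher.items()}


def ewc_penalty(model, fisher, old_params, lambda_ewc=0.4):
    """Quadratic penalty pulling important weights back toward their previous values."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return lambda_ewc * penalty
```

In a later phase, the task loss would be combined with `ewc_penalty(model, fisher, old_params)`, where `fisher` and `old_params` are snapshotted at the end of the previous phase.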

## Architecture

```text
GPT-OSS-20B Base Model
│
├─── Phase 1: Foundation Training
│    ├─ Data: GPT-5.2-codex-max (1000) + Claude 4.5 Opus (250) + Claude 4.5 Sonnet (250)
│    ├─ Layers: MLP + Attention
│    └─ Goal: Establish coding + reasoning foundation
│
├─── Phase 1.5: Knowledge Consolidation
│    ├─ Data: Mixed replay of Phase 1 data
│    ├─ Layers: MLP + Attention
│    └─ Goal: Prevent early forgetting
│
├─── Phase 2: Specialization Training
│    ├─ Data: Claude Sonnet (250) + GPT-5.2 high (250) + Replay (150)
│    ├─ Layers: MLP + Adapter
│    └─ Goal: Integrate balanced reasoning + maintain coding
│
└─── Phase 2.5: Gradual Unfreezing
     ├─ Data: Full mixed dataset
     ├─ Layers: Upper Attention layers + MLP + Adapter
     └─ Goal: Fine-tune attention patterns if needed
```
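
The phase schedule above could be driven from a simple configuration. The snippet below is a hypothetical sketch: the data mixes, trainable modules, and goals mirror the diagram, but the dictionary layout and the `PHASES` name are assumptions, not the repository's actual config format.

```python
# Hypothetical phase schedule mirroring the diagram above; not the repo's config format.
PHASES = [
    {
        "name": "phase_1_foundation",
        "data": {"gpt-5.2-codex-max": 1000, "claude-4.5-opus": 250, "claude-4.5-sonnet": 250},
        "trainable": ["mlp", "attention"],
        "goal": "Establish coding + reasoning foundation",
    },
    {
        "name": "phase_1_5_consolidation",
        "data": {"phase_1_replay": "mixed"},
        "trainable": ["mlp", "attention"],
        "goal": "Prevent early forgetting",
    },
    {
        "name": "phase_2_specialization",
        "data": {"claude-sonnet": 250, "gpt-5.2-high": 250, "replay": 150},
        "trainable": ["mlp", "adapter"],
        "goal": "Integrate balanced reasoning + maintain coding",
    },
    {
        "name": "phase_2_5_gradual_unfreezing",
        "data": {"full_mixed_dataset": "all"},
        "trainable": ["upper_attention", "mlp", "adapter"],
        "goal": "Fine-tune attention patterns if needed",
    },
]

# A training driver could iterate the schedule, re-freezing layers between phases:
# for phase in PHASES:
#     freeze_except(model, trainable_keywords=tuple(phase["trainable"]))
#     train_one_phase(model, build_dataset(phase["data"]))  # hypothetical helpers
```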