---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-235B-A22B
tags:
- prune
---

Same methodology as Kalomaze's 16B experiment: https://huggingface.co/kalomaze/Qwen3-16B-A3B/

- measure, per layer, the probability that each expert activates (over a personal set of fairly diverse calibration data)
- prune some of the least-used experts in each layer, reordering the router and expert indexing to match (a sketch of both steps follows)
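
A minimal sketch of how the measurement step could look, assuming a Qwen3-MoE-style checkpoint loaded through `transformers` where each layer's router is a linear module at `model.model.layers[i].mlp.gate` emitting one logit per expert. `NUM_EXPERTS`, `TOP_K`, the calibration texts, and the hook wiring are illustrative placeholders, not the exact script used for this repo:

```python
import torch
from collections import defaultdict
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen3-235B-A22B"  # illustrative; any Qwen3 MoE checkpoint
NUM_EXPERTS = 128               # experts per layer (check model.config)
TOP_K = 8                       # experts routed per token (check model.config)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto")
model.eval()

counts = defaultdict(lambda: torch.zeros(NUM_EXPERTS))

def make_hook(layer_idx):
    def hook(module, args, output):
        # Router logits: [num_tokens, NUM_EXPERTS]; the TOP_K highest
        # scores are the experts that actually run for each token.
        chosen = output.topk(TOP_K, dim=-1).indices.flatten().cpu()
        counts[layer_idx] += torch.bincount(chosen, minlength=NUM_EXPERTS).float()
    return hook

for i, layer in enumerate(model.model.layers):
    layer.mlp.gate.register_forward_hook(make_hook(i))

calibration_texts = ["..."]  # placeholder for the diverse calibration set
with torch.no_grad():
    for text in calibration_texts:
        model(**tok(text, return_tensors="pt"))

# Per-layer activation probability of each expert; the smallest entries
# are the pruning candidates.
probs = {i: c / c.sum() for i, c in counts.items()}
```

The pruning step could then keep only the most-used experts and slice the router rows to match; `KEEP` is a hypothetical target count, and the config attribute name may differ between model classes:

```python
KEEP = 96  # hypothetical number of experts to retain per layer

for i, layer in enumerate(model.model.layers):
    keep_idx = probs[i].topk(KEEP).indices.sort().values
    moe = layer.mlp
    # Drop the least-used experts, reindexing the survivors from 0..KEEP-1
    moe.experts = torch.nn.ModuleList(moe.experts[j] for j in keep_idx.tolist())
    # Reorder the router rows so logit j now scores surviving expert j
    moe.gate.weight = torch.nn.Parameter(moe.gate.weight[keep_idx].clone())
    moe.gate.out_features = KEEP

model.config.num_experts = KEEP  # assumed attribute name; verify per config
```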

---

Currently it is unusable, but I am working on training it on a small SFT set of Claude Instruct data to "heal" it, so to speak.

Training run: https://wandb.ai/new-eden/Prune-Experiments/runs/45utvk5c?nw=nwuserdeltavector