# Attention & Routing Matrices of some MoE-LLMs
## Dataset structure
```
./
├─ nhap.py
├─ usage.py
├─ usage_example.ipynb
├─ outputs/
│  ├─ mixtral-8x7b/
│  │  ├─ routing_matrices.npz
│  │  ├─ attention_matrices_multihead.npz
│  │  ├─ tokenized_input.npz
│  │  └─ metadata.txt
│  ├─ olmoe-7b/
│  │  └─ ...
│  ├─ qwen-moe-a2.7b/
│  │  └─ ...
│  └─ qwen-moe-a2.7b-chat/
│     └─ ...
```
The top directory contains:
- `outputs.zip`: the attention and routing matrices
- `nhap.py`: the code used to extract the attention and routing matrices
- `usage.py` and `usage_example.ipynb`: examples of loading the attention and routing matrices
Each subfolder `outputs/<model>` contains (a minimal loading sketch follows the list):
- `attention_matrices_multihead.npz` - dict `{layer_id -> attn_matrix}` with shape `[num_heads, seq_len, seq_len]`
- `routing_matrices.npz` - dict `{layer_id -> routing_logits}` with shape `[seq_len, num_experts]`
- `tokenized_input.npz` - dict whose `input_ids` entry stores the token ids of the input
- `metadata.txt` - model/context information
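
The archives are plain NumPy `.npz` files, so they can be read without PyTorch. Below is a minimal loading sketch; the example folder name and the assumption that the archive keys are layer ids come from the structure above, and `usage.py` / `usage_example.ipynb` remain the reference examples.

```python
import numpy as np

# Example folder; any model folder under outputs/ works the same way.
model_dir = "outputs/mixtral-8x7b"

# Each .npz archive maps a layer id (stored as a string key) to one matrix.
attn = np.load(f"{model_dir}/attention_matrices_multihead.npz")
routing = np.load(f"{model_dir}/routing_matrices.npz")
tokens = np.load(f"{model_dir}/tokenized_input.npz")

input_ids = tokens["input_ids"]          # token ids of the analyzed input

attn_layer = attn.files[0]               # first stored layer id
attn_matrix = attn[attn_layer]           # [num_heads, seq_len, seq_len]

routing_layer = routing.files[0]
routing_logits = routing[routing_layer]  # [seq_len, num_experts]

print("layers with attention:", len(attn.files))
print("attention matrix shape:", attn_matrix.shape)
print("routing logits shape:", routing_logits.shape)
print("input_ids shape:", input_ids.shape)
```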
## Requirements
- python 3.10
- numpy 1.24.0
- pytorch 2.4.0
- transformers 4.57.0
- datasets 4.1.1
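
To quickly check that an existing environment matches these pins, you can print the installed versions (a convenience snippet, not part of the dataset):

```python
import numpy, torch, transformers, datasets

# Compare against the pinned versions listed above.
for pkg in (numpy, torch, transformers, datasets):
    print(pkg.__name__, pkg.__version__)
```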
## Usage
- Download this dataset:
  ```
  hf download sg-nta/llm-attention --repo-type dataset
  ```
- Unzip `outputs.zip`:
  ```
  unzip outputs.zip
  ```
- See `usage.py` and `usage_example.ipynb` for examples of loading the matrices from `outputs/`; a rough end-to-end sketch is also included below.
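
As a rough end-to-end sketch after unzipping (the folder name and the softmax-over-logits reading of the routing matrices are assumptions; `usage.py` is the authoritative example):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

model_dir = "outputs/olmoe-7b"  # any unzipped model folder

routing = np.load(f"{model_dir}/routing_matrices.npz")
attn = np.load(f"{model_dir}/attention_matrices_multihead.npz")

# Per-layer expert load: average routing probability mass assigned to each expert,
# assuming the stored values are logits that can be softmaxed over the expert axis.
for layer in routing.files:
    probs = softmax(routing[layer])      # [seq_len, num_experts]
    expert_load = probs.mean(axis=0)     # [num_experts]
    print(f"layer {layer}: busiest expert = {expert_load.argmax()}")

# Attention received per token position, pooled over heads, for one layer.
layer = attn.files[0]
received = attn[layer].mean(axis=0).sum(axis=0)  # [seq_len]
print("most-attended token position:", int(received.argmax()))
```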