
Attention & Routing Matrices of some MoE-LLMs

Dataset structure

...
./
├─ nhap.py
├─ usage.py
├─ usage_example.ipynb
├─ outputs/
│  ├─ mixtral-8x7b/
│  │  ├─ routing_matrices.npz
│  │  ├─ attention_matrices_multihead.npz
│  │  ├─ tokenized_input.npz
│  │  └─ metadata.txt
│  ├─ olmoe-7b/
│  │  └─ ...
│  ├─ qwen-moe-a2.7b/
│  │  └─ ...
│  └─ qwen-moe-a2.7b-chat/
│     └─ ...

The top directory contains:

  • outputs.zip: the attention and routing matrices
  • nhap.py: code to extract the attention and routing matrices
  • usage.py and usage_example.ipynb: examples of loading the attention and routing matrices

Each subfolder outputs/<model> contains:

  • attention_matrices_multihead.npz - dict {layer_id -> attn_matrix}, each matrix with shape [num_heads, seq_len, seq_len]
  • routing_matrices.npz - dict {layer_id -> routing_logits}, each with shape [seq_len, num_experts]
  • tokenized_input.npz - dict whose input_ids entry stores the token ids of the input
  • metadata.txt - model/context information
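The file layout above can be sketched with plain NumPy. This is a hedged example, not the dataset's own loader: it builds a tiny synthetic set of .npz files in the assumed format (layer ids as string keys) and reads them back the way the real outputs/<model> files would be read; the sizes and the "0" layer key are illustrative assumptions.

```python
import tempfile
from pathlib import Path

import numpy as np

# Assumed format: each .npz maps a layer id (string key) to one matrix.
# Build a tiny synthetic model folder to keep the sketch self-contained.
tmp = Path(tempfile.mkdtemp())
num_heads, seq_len, num_experts = 2, 4, 8

np.savez(tmp / "attention_matrices_multihead.npz",
         **{"0": np.random.rand(num_heads, seq_len, seq_len)})
np.savez(tmp / "routing_matrices.npz",
         **{"0": np.random.rand(seq_len, num_experts)})
np.savez(tmp / "tokenized_input.npz", input_ids=np.arange(seq_len))

# Loading mirrors what usage.py is expected to do for outputs/<model>.
attn = np.load(tmp / "attention_matrices_multihead.npz")
routing = np.load(tmp / "routing_matrices.npz")
tokens = np.load(tmp / "tokenized_input.npz")

for layer_id in attn.files:
    print(layer_id, attn[layer_id].shape)   # [num_heads, seq_len, seq_len]
```

For the real data, point the paths at e.g. outputs/mixtral-8x7b after unzipping outputs.zip.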

Requirements

  • Python 3.10
  • NumPy 1.24.0
  • PyTorch 2.4.0
  • Transformers 4.57.0
  • Datasets 4.1.1

Usage

  • Download this dataset:
hf download sg-nta/llm-attention --repo-type dataset
  • Unzip the archive:
unzip outputs.zip
  • See usage.py and usage_example.ipynb for examples of loading the matrices from outputs/
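The routing matrices store raw router logits. A hedged sketch of how such logits are typically reduced to per-token expert assignments (top-k selection plus a softmax, with k=2 as in Mixtral-8x7B); the logits here are synthetic, not values from this dataset:

```python
import numpy as np

# Synthetic router logits with shape [seq_len, num_experts],
# matching the layout of routing_matrices.npz described above.
rng = np.random.default_rng(0)
logits = rng.standard_normal((5, 8))

k = 2  # Mixtral-8x7B routes each token to its top-2 experts
topk = np.argsort(logits, axis=-1)[:, ::-1][:, :k]  # indices of the k largest logits

# Softmax over experts gives the routing probabilities per token.
probs = np.exp(logits - logits.max(-1, keepdims=True))
probs /= probs.sum(-1, keepdims=True)

for t, experts in enumerate(topk):
    print(f"token {t}: experts {experts.tolist()}")
```

Whether the stored routing_logits are pre- or post-softmax is not stated in this README; check metadata.txt or nhap.py before interpreting them as probabilities.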