# Attention & Routing Matrices of some MoE-LLMs

## Dataset structure

```
...
./
├─ nhap.py
├─ usage.py
├─ usage_example.ipynb
├─ outputs/
│  ├─ mixtral-8x7b/
│  │  ├─ routing_matrices.npz
│  │  ├─ attention_matrices_multihead.npz
│  │  ├─ tokenized_input.npz
│  │  └─ metadata.txt
│  ├─ olmoe-7b/
│  │  └─ ...
│  ├─ qwen-moe-a2.7b/
│  │  └─ ...
│  └─ qwen-moe-a2.7b-chat/
│     └─ ...
```

The top-level directory contains:
- `outputs.zip`: the attention and routing matrices
- `nhap.py`: code to extract the attention and routing matrices
- `usage.py` and `usage_example.ipynb`: examples of loading the attention and routing matrices

Each subfolder `outputs/<model>` contains:
- `attention_matrices_multihead.npz`: a dict `{layer_id -> attn_matrix}`, each matrix of shape `[num_heads, seq_len, seq_len]`
- `routing_matrices.npz`: a dict `{layer_id -> routing_logits}`, each of shape `[seq_len, num_experts]`
- `tokenized_input.npz`: a dict whose `input_ids` entry stores the token ids of the input
- `metadata.txt`: model/context information

## Requirements
- python 3.10
- numpy 1.24.0
- pytorch 2.4.0
- transformers 4.57.0
- datasets 4.1.1

## Usage

- Download this dataset:

```
hf download sg-nta/llm-attention --repo-type dataset
```

- Unzip `outputs.zip`:

```
unzip outputs.zip
```

- See `usage.py` and `usage_example.ipynb` for examples of loading the matrices from `outputs/`
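
Since `routing_matrices.npz` stores raw routing logits, a common first step is recovering which experts each token was routed to. A minimal sketch with synthetic logits; `k=2` mirrors Mixtral-8x7B's top-2 routing, but the right `k` for each model is an assumption — consult `metadata.txt`:

```python
import numpy as np

def top_k_experts(routing_logits: np.ndarray, k: int = 2) -> np.ndarray:
    """Return the indices of the k highest-scoring experts per token.

    routing_logits: [seq_len, num_experts] array of raw router scores.
    """
    # Sort expert scores in descending order along the expert axis,
    # then keep the first k indices for each token.
    return np.argsort(-routing_logits, axis=-1)[:, :k]

# Toy logits for 2 tokens and 4 experts.
logits = np.array([[0.1, 2.0, -1.0, 0.5],
                   [1.5, 0.0,  3.0, -2.0]])
print(top_k_experts(logits))  # token 0 -> experts [1, 3]; token 1 -> [2, 0]
```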