---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-0.5B
---

## Model Description

This Memory Decoder is trained on law-domain data and can be plugged into any model in the Llama3, Llama3.1, and Llama3.2 families to enhance its performance on that domain.

> [!IMPORTANT]
> Memory Decoders for the Llama families are initialized from Qwen models, with the embedding layer adapted to fit the Llama tokenizer. This enables efficient cross-model-family knowledge transfer.

**Paper:** [Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models](https://www.arxiv.org/abs/2508.09874)

**GitHub:** [https://github.com/LUMIA-Group/MemoryDecoder](https://github.com/LUMIA-Group/MemoryDecoder/tree/main)
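
## Usage Sketch

The paper describes Memory Decoder as a plug-and-play memory: at inference, its next-token distribution is interpolated with the base model's. Below is a minimal sketch of that idea, assuming the decoder shares the Llama vocabulary (per the note above). The repo-ID placeholder and the interpolation weight `lam` are illustrative only; prefer the official inference code in the GitHub repository.

```python
# Illustrative sketch, not the official inference code: interpolate the
# Memory Decoder's next-token distribution with a base Llama model's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"  # any supported Llama3.x model
memdec_id = "<this-repo-id>"            # this Memory Decoder checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
memdec = AutoModelForCausalLM.from_pretrained(memdec_id, torch_dtype=torch.bfloat16)

lam = 0.5  # interpolation weight (assumed value; tune per domain)

inputs = tokenizer("The asylum claim was denied because", return_tensors="pt")
with torch.no_grad():
    p_base = base(**inputs).logits[:, -1].softmax(-1)
    p_mem = memdec(**inputs).logits[:, -1].softmax(-1)

# Plug-and-play interpolation of the two next-token distributions.
p_next = lam * p_mem + (1.0 - lam) * p_base
print(tokenizer.decode(p_next.argmax(-1)))
```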

## Training & Evaluation Data

**Law Domain Dataset:** [AsyLex](https://huggingface.co/datasets/clairebarale/AsyLex)

**Test Split:** [MemoryDecoder-domain-data](https://huggingface.co/datasets/Clover-Hill/MemoryDecoder-domain-data)
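
Both datasets are hosted on the Hugging Face Hub; a minimal loading sketch with the `datasets` library is below (configuration and split names are assumptions; check each dataset card for the exact layout).

```python
# Minimal sketch: pull the training and evaluation data from the Hub.
# Config/split names are assumptions; see the dataset cards for exact names.
from datasets import load_dataset

law_data = load_dataset("clairebarale/AsyLex")                     # law-domain corpus
test_data = load_dataset("Clover-Hill/MemoryDecoder-domain-data")  # evaluation data
print(law_data)
print(test_data)
```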

## Performance Results

### Llama3 Family

| Model | Base Model | Base + MemDec |
|-------|------------|---------------|
| Llama3-8B | 5.96 | 4.46 |
| Llama3-70B | 4.90 | 4.07 |

### Llama3.1 Family

| Model | Base Model | Base + MemDec |
|-------|------------|---------------|
| Llama3.1-8B | 5.88 | 4.42 |
| Llama3.1-70B | 4.89 | 4.06 |

### Llama3.2 Family

| Model | Base Model | Base + MemDec |
|-------|------------|---------------|
| Llama3.2-1B | 8.23 | 5.11 |
| Llama3.2-3B | 6.83 | 4.76 |

*Perplexity scores on the law-domain test set. Lower is better.*
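
For reference, the metric above is standard token-level perplexity: the exponential of the mean negative log-likelihood over the test set. A minimal sketch, assuming a Hugging Face causal LM and tokenizer:

```python
# Sketch of the perplexity metric: exp(mean token-level NLL) over a corpus.
import math
import torch

@torch.no_grad()
def perplexity(model, tokenizer, texts):
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss  # mean cross-entropy per predicted token
        total_nll += loss.item() * (ids.numel() - 1)
        total_tokens += ids.numel() - 1
    return math.exp(total_nll / total_tokens)
```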

## Citation

```bibtex
@article{cao2025memory,
  title={Memory decoder: A pretrained, plug-and-play memory for large language models},
  author={Cao, Jiaqi and Wang, Jiarui and Wei, Rubin and Guo, Qipeng and Chen, Kai and Zhou, Bowen and Lin, Zhouhan},
  journal={arXiv preprint arXiv:2508.09874},
  year={2025}
}
```

## Contact

For questions and support: maximus.cao@outlook.com