---
license: apache-2.0
datasets:
- HuggingFaceTB/smollm-corpus
base_model:
- HuggingFaceTB/SmolLM-135M
pipeline_tag: text-generation
---
This model accompanies the research paper ["Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs"](https://arxiv.org/abs/2502.14837).
## Usage
Please refer to the GitHub repository MHA2MLA for details on training and evaluation. The inference code is still being optimized; please follow our subsequent work for updates.
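As a rough illustration of the idea behind the paper, Multi-Head Latent Attention (MLA) caches a small shared latent vector per token instead of full per-head keys and values, reconstructing K and V from it at attention time. The sketch below is a minimal, hedged NumPy toy: all dimensions, weight names, and the random weights are illustrative assumptions, not the paper's actual architecture or this model's parameters.

```python
import numpy as np

# Toy dimensions (assumed for illustration; not the paper's settings).
d_model, n_heads, d_head, d_latent = 64, 4, 16, 8
seq_len = 10
rng = np.random.default_rng(0)

# Hypothetical projection matrices: one shared down-projection to the
# latent (this is what gets cached), and two up-projections back to K / V.
W_dkv = rng.standard_normal((d_model, d_latent)) * 0.1
W_uk = rng.standard_normal((d_latent, n_heads * d_head)) * 0.1
W_uv = rng.standard_normal((d_latent, n_heads * d_head)) * 0.1

x = rng.standard_normal((seq_len, d_model))  # token hidden states

# The KV cache stores only the latent: seq_len x d_latent values.
latent_cache = x @ W_dkv

# At attention time, per-head K and V are reconstructed from the latent.
K = (latent_cache @ W_uk).reshape(seq_len, n_heads, d_head)
V = (latent_cache @ W_uv).reshape(seq_len, n_heads, d_head)

# Compare cache footprints: standard MHA caches both K and V per head.
full_cache_size = 2 * seq_len * n_heads * d_head
mla_cache_size = seq_len * d_latent
print(full_cache_size, mla_cache_size)  # the latent cache is far smaller
```

In this toy setup the cache shrinks from 2 x 4 x 16 = 128 values per token to 8, which is the kind of KV-cache saving MLA targets; the actual MHA2MLA conversion and its fine-tuning procedure are described in the paper and repository.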
## Citation
```bibtex
@misc{ji2025economicalinferenceenablingdeepseeks,
      title={Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs},
      author={Tao Ji and Bin Guo and Yuanbin Wu and Qipeng Guo and Lixing Shen and Zhan Chen and Xipeng Qiu and Qi Zhang and Tao Gui},
      year={2025},
      eprint={2502.14837},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14837},
}
```