Improve model card for MALA model
This PR significantly improves the model card for the MALA model. It adds:
- The `pipeline_tag: image-feature-extraction` to ensure discoverability for relevant tasks on the Hub.
- The `library_name: transformers` for better integration and usage guidance.
- The `license: apache-2.0` for clarity on usage terms.
- A direct link to the paper on Hugging Face Papers.
- The paper's abstract for a comprehensive overview.
- A link to the (inferred) official GitHub repository for easy access to the code and further resources.
- The BibTeX citation for proper academic attribution.
This will greatly enhance the model's information and usability on the Hugging Face Hub.
README.md
CHANGED
---
pipeline_tag: image-feature-extraction
library_name: transformers
license: apache-2.0
---

# MALA: Magnitude-Aware Linear Attention

This repository contains models for **MALA (Magnitude-Aware Linear Attention)**, a novel attention mechanism introduced in the paper [Rectifying Magnitude Neglect in Linear Attention](https://huggingface.co/papers/2507.00698) (ICCV 2025 Highlight).

MALA addresses a critical issue in Linear Attention by fully incorporating the magnitude information of the Query, enabling the attention score distribution to dynamically adapt and closely resemble that of Softmax Attention, while maintaining linear complexity. This leads to strong performance across a wide range of tasks.

## Abstract

As the core operator of Transformers, Softmax Attention exhibits excellent global modeling capabilities. However, its quadratic complexity limits its applicability to vision tasks. In contrast, Linear Attention shares a similar formulation with Softmax Attention while achieving linear complexity, enabling efficient global information modeling. Nevertheless, Linear Attention suffers from a significant performance degradation compared to standard Softmax Attention. In this paper, we analyze the underlying causes of this issue based on the formulation of Linear Attention. We find that, unlike Softmax Attention, Linear Attention entirely disregards the magnitude information of the Query. This prevents the attention score distribution from dynamically adapting as the Query scales. As a result, despite its structural similarity to Softmax Attention, Linear Attention exhibits a significantly different attention score distribution. Based on this observation, we propose Magnitude-Aware Linear Attention (MALA), which modifies the computation of Linear Attention to fully incorporate the Query's magnitude. This adjustment allows MALA to generate an attention score distribution that closely resembles Softmax Attention while exhibiting a more well-balanced structure. We evaluate the effectiveness of MALA on multiple tasks, including image classification, object detection, instance segmentation, semantic segmentation, natural language processing, speech recognition, and image generation. Our MALA achieves strong results on all of these tasks.
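
The magnitude-neglect issue described above can be seen directly in code. The sketch below (illustrative only; the exact MALA formulation is given in the paper and official repository) shows that normalized linear attention scores with a homogeneous feature map such as ReLU are invariant to rescaling the query, while softmax attention scores sharpen as the query's magnitude grows:

```python
import numpy as np

def softmax_scores(q, K):
    # Softmax attention scores for one query against keys K:
    # the distribution sharpens as ||q|| grows.
    logits = K @ q
    e = np.exp(logits - logits.max())
    return e / e.sum()

def linear_scores(q, K):
    # Normalized linear attention scores with a ReLU feature map.
    # Since phi(c * q) = c * phi(q) for c > 0, the row normalization
    # cancels the query's magnitude entirely.
    phi = lambda x: np.maximum(x, 0.0)
    s = phi(K) @ phi(q)
    return s / s.sum()

rng = np.random.default_rng(0)
K = rng.normal(size=(6, 16))  # 6 keys, 16-dim
q = rng.normal(size=16)

lin_a = linear_scores(q, K)
lin_b = linear_scores(5.0 * q, K)    # identical distribution: magnitude ignored
soft_a = softmax_scores(q, K)
soft_b = softmax_scores(5.0 * q, K)  # sharper distribution: magnitude matters
```

Here `lin_a` and `lin_b` are identical despite the 5x query rescaling, whereas `soft_b` concentrates more mass on the top key than `soft_a`; MALA's contribution is to restore this magnitude sensitivity while keeping linear complexity.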

## Code

The official code and other resources for MALA can be found in the [GitHub repository](https://github.com/aldjalkdf/MAViT).

## Citation

If you find this work useful, please cite the paper:

```bibtex
@inproceedings{fan2024rect,
  title={Rectifying Magnitude Neglect in Linear Attention},
  author={Qihang Fan and Huaibo Huang and Yuang Ai and Ran He},
  booktitle={ICCV},
  year={2025}
}
```