
Add pipeline_tag to model card metadata

#1
Opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +9 -7
README.md CHANGED

```diff
@@ -1,8 +1,10 @@
 ---
-license: apache-2.0
 base_model:
 - Wan-AI/Wan2.1-T2V-1.3B
+license: apache-2.0
+pipeline_tag: text-to-video
 ---
+
 <div align="center">
 
 ## MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head
@@ -27,9 +29,9 @@ base_model:
 
 </div>
 
-MHLA is a **universal high-efficiency** linear attention operator. MHLA can be applied to image classification, image generation, language modeling, and video generation tasks, maintaining performance consistent with Flash Attention while achieving significant speed advantages over Flash Attention under long-sequence conditions. For more details, please refer to our [paper](#).
+MHLA is a **universal high-efficiency** linear attention operator. MHLA can be applied to image classification, image generation, language modeling, and video generation tasks, maintaining performance consistent with Flash Attention while achieving significant speed advantages over Flash Attention under long-sequence conditions. For more details, please refer to our [paper](https://arxiv.org/abs/2601.07832).
 
-This repository is organized into four sub-projects: [`mhla_dit`](mhla_dit), [`mhla_image_classification`](mhla_image_classification), [`mhla_nlp`](mhla_nlp), and [`mhla_videogen`](mhla_videogen). Each corresponds to the experimental code for the four tasks presented in our paper. Each sub-project contains its own README.md with detailed instructions.
+This repository is organized into four sub-projects: [`mhla_dit`](https://github.com/DAGroup-PKU/MHLA/tree/main/mhla_dit), [`mhla_image_classification`](https://github.com/DAGroup-PKU/MHLA/tree/main/mhla_image_classification), [`mhla_nlp`](https://github.com/DAGroup-PKU/MHLA/tree/main/mhla_nlp), and [`mhla_videogen`](https://github.com/DAGroup-PKU/MHLA/tree/main/mhla_videogen). Each corresponds to the experimental code for the four tasks presented in our paper. Each sub-project contains its own README.md with detailed instructions.
 
 ## Updates
 * `[2026.01.12]` 🔥 Our paper is available at [arxiv](https://arxiv.org/abs/2601.07832).
@@ -37,10 +39,10 @@ This repository is organized into four sub-projects: [`mhla_dit`](mhla_dit), [`m
 
 ## Installation & Usage
 Please refer to the README.md files in the following sub-projects for detailed information:
-- [mhla_dit](mhla_dit/README.md)
-- [mhla_image_classification](mhla_image_classification/README.md)
-- [mhla_nlp](mhla_nlp/README.md)
-- [mhla_videogen](mhla_videogen/README.md)
+- [mhla_dit](https://github.com/DAGroup-PKU/MHLA/tree/main/mhla_dit/README.md)
+- [mhla_image_classification](https://github.com/DAGroup-PKU/MHLA/tree/main/mhla_image_classification/README.md)
+- [mhla_nlp](https://github.com/DAGroup-PKU/MHLA/tree/main/mhla_nlp/README.md)
+- [mhla_videogen](https://github.com/DAGroup-PKU/MHLA/tree/main/mhla_videogen/README.md)
 
 ## Performance & Efficiency
```
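The metadata this PR touches lives in the YAML front matter at the top of the README. As an illustration, here is a minimal stdlib sketch that extracts those fields from a card; the hand-rolled `front_matter` helper is hypothetical and for illustration only (the Hub parses this block with a real YAML library, e.g. via `huggingface_hub.ModelCard`):

```python
# Sketch: pull the flat key/value metadata out of a model card's
# leading YAML front matter, to check the fields this PR adds.
# Hand-rolled parsing for illustration only -- not the Hub's parser.

README = """\
---
base_model:
- Wan-AI/Wan2.1-T2V-1.3B
license: apache-2.0
pipeline_tag: text-to-video
---

<div align="center">
"""

def front_matter(text: str) -> dict:
    """Parse flat keys and simple lists between the leading '---' fences."""
    lines = text.splitlines()
    assert lines[0] == "---", "card must start with a YAML front matter fence"
    end = lines.index("---", 1)          # closing fence of the metadata block
    meta, key = {}, None
    for line in lines[1:end]:
        if line.startswith("- ") and key is not None:
            meta.setdefault(key, []).append(line[2:])   # list item for prior key
        elif ":" in line:
            key, _, value = line.partition(":")
            value = value.strip()
            meta[key] = value if value else []          # empty value => list follows
    return meta

meta = front_matter(README)
print(meta["pipeline_tag"])   # text-to-video
print(meta["license"])        # apache-2.0
print(meta["base_model"])     # ['Wan-AI/Wan2.1-T2V-1.3B']
```

The `pipeline_tag: text-to-video` field is what lets the Hub file the model under the text-to-video task filter, which is the point of this change.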