---
base_model:
- lmms-lab/llava-onevision-qwen2-0.5b-ov
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
metrics:
- accuracy
pipeline_tag: video-text-to-text
tags:
- Action
- Video
- MQA
- multimodal
- MLLMs
- LLaVAction
---

# LLaVAction-0.5B

<div align="center">
<h2>LLaVAction: evaluating and training multi-modal large language models for action recognition
</h2>

[Shaokai Ye](https://yeshaokai.github.io/)<sup>1**</sup>&nbsp; 
[Haozhe Qi](https://people.epfl.ch/haozhe.qi)<sup>1**</sup>&nbsp;

[Alexander Mathis](https://mathislab.org/)<sup>1†</sup>&nbsp;
[Mackenzie Weygandt Mathis](https://www.mackenziemathislab.org/mackenziemathis)<sup>1†✉</sup>&nbsp;

<sup>1</sup> EPFL

<sup>**</sup> First authors &nbsp; <sup>†</sup> Senior authors &nbsp; <sup>✉</sup> Corresponding author

\[[Paper](https://huggingface.co/papers/2503.18712)\] &nbsp; \[[Project Page](https://mmathislab.github.io/llavaction/)\] &nbsp; \[[Github Repo](https://github.com/AdaptiveMotorControlLab/LLaVAction)\] &nbsp; 

</div>

## Model Description

LLaVAction-0.5B is a multi-modal large language model (MLLM) trained for action recognition. It is built on the Qwen2 language model with a 32K-token context window and fine-tuned on the EPIC-KITCHENS-100-MQA dataset. The model takes video input and answers questions about the actions performed in the video. It achieves state-of-the-art performance on the EPIC-KITCHENS-100 Challenge and outperforms GPT-4o by 21 points in accuracy on EPIC-KITCHENS-100-MQA. It also improves on other action-related video benchmarks, including EgoSchema, PerceptionTest, LongVideoBench, VideoMME, and MVBench.

## Paper Abstract

Understanding human behavior requires measuring behavioral actions. Due to its complexity, behavior is best mapped onto a rich, semantic structure such as language. The recent development of multi-modal large language models (MLLMs) is a promising candidate for a wide range of action understanding tasks. In this work, we focus on evaluating and then improving MLLMs to perform action recognition. We reformulate EPIC-KITCHENS-100, one of the largest and most challenging egocentric action datasets, to the form of video multiple question answering (EPIC-KITCHENS-100-MQA). We show that when we sample difficult incorrect answers as distractors, leading MLLMs struggle to recognize the correct actions. We propose a series of methods that greatly improve the MLLMs' ability to perform action recognition, achieving state-of-the-art on both the EPIC-KITCHENS-100 validation set, as well as outperforming GPT-4o by 21 points in accuracy on EPIC-KITCHENS-100-MQA. Lastly, we show improvements on other action-related video benchmarks such as EgoSchema, PerceptionTest, LongVideoBench, VideoMME and MVBench, suggesting that MLLMs are a promising path forward for complex action tasks. Code and models are available at: https://github.com/AdaptiveMotorControlLab/LLaVAction.


## Usage

### Intended Use
The model was trained on EPIC-KITCHENS-100-MQA. It is intended for videos similar to those in EPIC-KITCHENS-100, primarily egocentric recordings of human actions.


### Example Code

```python
# ... (Code example from the original model card) ...
```
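The sketch below is illustrative only: it follows the LLaVA-OneVision-style video-QA inference pattern that the LLaVAction codebase builds on. The module paths (`llavaction.model.builder`, `llavaction.mm_utils`, `llavaction.constants`), the repository id, and the prompt handling are assumptions and should be checked against the GitHub repo.

```python
# Illustrative sketch, not the official snippet: module paths, the repo id,
# and the prompt handling below are assumptions based on the LLaVA-OneVision
# inference convention that LLaVAction follows.
import numpy as np
import torch
from decord import VideoReader, cpu

from llavaction.model.builder import load_pretrained_model
from llavaction.mm_utils import tokenizer_image_token
from llavaction.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX

pretrained = "MLLMs/LLaVAction-0.5B"  # assumed repository id
device = "cuda"

tokenizer, model, image_processor, _ = load_pretrained_model(
    pretrained, None, "llava_qwen", torch_dtype="bfloat16", device_map="auto"
)
model.eval()

def load_video(path, num_frames=16):
    """Uniformly sample frames from a video as an (N, H, W, 3) uint8 array."""
    vr = VideoReader(path, ctx=cpu(0))
    idx = np.linspace(0, len(vr) - 1, num_frames, dtype=int)
    return vr.get_batch(idx).asnumpy()

# Preprocess the sampled frames into the vision tower's expected tensor format.
frames = load_video("example.mp4")
video = image_processor.preprocess(frames, return_tensors="pt")["pixel_values"]
video = video.to(device, dtype=torch.bfloat16)

# Ask a multiple-question-answering-style question about the egocentric action.
question = (DEFAULT_IMAGE_TOKEN
            + "\nThe video is an egocentric recording. What action is the person performing?")
input_ids = tokenizer_image_token(
    question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt"
).unsqueeze(0).to(device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        images=[video],
        modalities=["video"],
        do_sample=False,
        max_new_tokens=256,
    )
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```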

## Training Details

See Ye et al. (2025) for full training details: [https://huggingface.co/papers/2503.18712](https://huggingface.co/papers/2503.18712)

### Model
- **Architecture**: SO400M (SigLIP) vision encoder + Qwen2 language model
- **Initialized Model**: lmms-lab/llava-onevision-qwen2-0.5b-ov
- **Data**: EPIC-KITCHENS-100-MQA, 2 epochs, full-model fine-tuning
- **Precision**: bfloat16


### Hardware & Software
- GPUs: 32x NVIDIA GH200 (used to train the whole model series)
- Orchestration: Hugging Face Trainer (see the configuration sketch below)
- Neural networks: PyTorch
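
As a rough illustration of how the settings above map onto the Hugging Face Trainer, the following `TrainingArguments` sketch reflects only the stated bfloat16 precision and 2 training epochs; every other value (batch size, learning rate, scheduler) is a placeholder assumption, not the released LLaVAction recipe.

```python
# Illustrative only: reflects the stated bfloat16 precision and 2 epochs of
# full-model fine-tuning on EPIC-KITCHENS-100-MQA. All other values are
# placeholders, not the released LLaVAction training configuration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llavaction-0.5b-ek100-mqa",
    num_train_epochs=2,              # stated: 2 epochs
    bf16=True,                       # stated: bfloat16 precision
    per_device_train_batch_size=1,   # placeholder
    gradient_accumulation_steps=4,   # placeholder
    learning_rate=1e-5,              # placeholder
    lr_scheduler_type="cosine",      # placeholder
    warmup_ratio=0.03,               # placeholder
    logging_steps=10,
    save_strategy="epoch",
    report_to="none",
)
```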

## Citation

```bibtex
@article{YeQi2025llavaction,
  title={LLaVAction: evaluating and training multi-modal large language models for action recognition},
  author={Ye, Shaokai and Qi, Haozhe and Mathis, Alexander and Mathis, Mackenzie W.},
  journal={arXiv preprint arXiv:2503.18712},
  year={2025}
}
```