---
base_model:
- lmms-lab/llava-onevision-qwen2-0.5b-ov
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
metrics:
- accuracy
pipeline_tag: video-text-to-text
tags:
- Action
- Video
- MQA
- multimodal
- MLLMs
- LLaVAction
---

# LLaVAction-0.5B

<div align="center">

<h2>LLaVAction: evaluating and training multi-modal large language models for action recognition</h2>

[Shaokai Ye](https://yeshaokai.github.io/)<sup>1**</sup>
[Haozhe Qi](https://people.epfl.ch/haozhe.qi)<sup>1**</sup>

[Alexander Mathis](https://mathislab.org/)<sup>1</sup><sup>†</sup>
[Mackenzie Weygandt Mathis](https://www.mackenziemathislab.org/mackenziemathis)<sup>1</sup><sup>†</sup><sup>‡</sup>

<sup>1</sup> EPFL

<sup>**</sup> First authors <sup>†</sup> Senior authors <sup>‡</sup> Corresponding author

\[[Paper](https://huggingface.co/papers/2503.18712)\] \[[Project Page](https://mmathislab.github.io/llavaction/)\] \[[Github Repo](https://github.com/AdaptiveMotorControlLab/LLaVAction)\]

</div>

## Model Description

LLaVAction-0.5B is a multi-modal large language model (MLLM) trained for action recognition. It is built on LLaVA-OneVision with a Qwen2-0.5B language backbone (32K-token context window) and fine-tuned on the EPIC-KITCHENS-100-MQA dataset. The model takes video as input and answers questions about the actions performed in the video. It achieves state-of-the-art performance on the EPIC-KITCHENS-100 Challenge and outperforms GPT-4o by 21 points in accuracy on EPIC-KITCHENS-100-MQA. It also improves results on other action-related video benchmarks such as EgoSchema, PerceptionTest, LongVideoBench, VideoMME, and MVBench.

## Paper Abstract

Understanding human behavior requires measuring behavioral actions. Due to its complexity, behavior is best mapped onto a rich, semantic structure such as language. The recent development of multi-modal large language models (MLLMs) is a promising candidate for a wide range of action understanding tasks. In this work, we focus on evaluating and then improving MLLMs to perform action recognition. We reformulate EPIC-KITCHENS-100, one of the largest and most challenging egocentric action datasets, to the form of video multiple question answering (EPIC-KITCHENS-100-MQA). We show that when we sample difficult incorrect answers as distractors, leading MLLMs struggle to recognize the correct actions. We propose a series of methods that greatly improve the MLLMs' ability to perform action recognition, achieving state-of-the-art on both the EPIC-KITCHENS-100 validation set, as well as outperforming GPT-4o by 21 points in accuracy on EPIC-KITCHENS-100-MQA. Lastly, we show improvements on other action-related video benchmarks such as EgoSchema, PerceptionTest, LongVideoBench, VideoMME and MVBench, suggesting that MLLMs are a promising path forward for complex action tasks. Code and models are available at: https://github.com/AdaptiveMotorControlLab/LLaVAction.
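
To make the EPIC-KITCHENS-100-MQA idea concrete, the sketch below turns one action annotation into a multiple-choice question with "hard" distractors. It is purely illustrative: the field names, the five-option format, and the verb/noun-overlap heuristic are assumptions for exposition, not the paper's actual construction procedure.

```python
# Illustrative only: turn an EPIC-KITCHENS-style annotation (verb + noun) into a
# multiple-choice question with hard distractors. The sampling heuristic below is
# an assumption; the real EPIC-KITCHENS-100-MQA pipeline may differ.
import random

ground_truth = {"verb": "cut", "noun": "onion"}  # toy annotation for one video segment
other_actions = [                                # other actions occurring in the dataset
    {"verb": "cut", "noun": "tomato"},
    {"verb": "peel", "noun": "onion"},
    {"verb": "wash", "noun": "onion"},
    {"verb": "open", "noun": "fridge"},
    {"verb": "stir", "noun": "soup"},
]

def action_text(a):
    return f"{a['verb']} {a['noun']}"

# Heuristic: "hard" distractors share the verb or the noun with the ground truth,
# so a single visual cue (object identity or hand motion) is not enough to answer.
hard = [a for a in other_actions
        if a["verb"] == ground_truth["verb"] or a["noun"] == ground_truth["noun"]]
easy = [a for a in other_actions if a not in hard]
distractors = (hard + easy)[:4]

options = [action_text(ground_truth)] + [action_text(a) for a in distractors]
random.shuffle(options)

question = "What action is the person performing in the video?\n" + "\n".join(
    f"{chr(ord('A') + i)}. {opt}" for i, opt in enumerate(options)
)
answer = chr(ord("A") + options.index(action_text(ground_truth)))
print(question)
print("Answer:", answer)
```
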
## Usage

### Intended Use

The model was trained on EPIC-KITCHENS-100-MQA. It is intended for videos similar to EPIC-KITCHENS-100, i.e., primarily egocentric videos of human actions.

### Example Code

```python
# ... (Code example from the original model card) ...
```
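
The original card's example code is not reproduced above; the official inference example lives in the [Github Repo](https://github.com/AdaptiveMotorControlLab/LLaVAction). As a stand-in, the sketch below assumes the LLaVAction codebase mirrors the LLaVA-OneVision loading and conversation API: the `llavaction` module paths, the `llava_qwen` model name, the `qwen_1_5` conversation template, and the `decord`-based frame sampling are all assumptions, so consult the repository for the exact interface.

```python
# Minimal sketch (not the official example): ask LLaVAction-0.5B about a video clip.
# Module paths and model/template names are assumed to mirror LLaVA-OneVision.
import copy

import numpy as np
import torch
from decord import VideoReader, cpu  # used only to sample frames from the video

from llavaction.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX  # assumed path
from llavaction.conversation import conv_templates                       # assumed path
from llavaction.mm_utils import tokenizer_image_token                    # assumed path
from llavaction.model.builder import load_pretrained_model               # assumed path


def sample_frames(video_path, num_frames=16):
    """Uniformly sample `num_frames` RGB frames from a video file."""
    vr = VideoReader(video_path, ctx=cpu(0))
    idx = np.linspace(0, len(vr) - 1, num_frames, dtype=int)
    return vr.get_batch(idx).asnumpy()  # (T, H, W, C)


pretrained = "path/to/LLaVAction-0.5B"  # this checkpoint's Hugging Face id or a local path
tokenizer, model, image_processor, _ = load_pretrained_model(
    pretrained, None, "llava_qwen", device_map="auto"
)
model.eval()

frames = sample_frames("kitchen_clip.mp4")  # any egocentric clip
video = image_processor.preprocess(frames, return_tensors="pt")["pixel_values"]
video = video.to(model.device, dtype=model.dtype)  # match the model's loaded precision

question = (
    DEFAULT_IMAGE_TOKEN
    + "\nThe video is captured from an egocentric view. What action is the person performing?"
)
conv = copy.deepcopy(conv_templates["qwen_1_5"])  # assumed conversation template key
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_image_token(
    prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt"
).unsqueeze(0).to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        images=[video],
        modalities=["video"],
        do_sample=False,
        max_new_tokens=256,
    )
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```

The same pattern should carry over to the larger LLaVAction checkpoints by swapping the `pretrained` path.
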
## Training Details

See Ye et al. (2025) for full training details: [https://huggingface.co/papers/2503.18712](https://huggingface.co/papers/2503.18712)

### Model

- **Architecture**: SigLIP-SO400M vision encoder + Qwen2 language model
- **Initialized Model**: lmms-lab/llava-onevision-qwen2-0.5b-ov
- **Data**: EPIC-KITCHENS-100-MQA, 2 epochs, full-model fine-tuning
- **Precision**: bfloat16

### Hardware & Software

- GPUs: 32 × NVIDIA GH200 (used to train the whole model series)
- Orchestration: Hugging Face Trainer (see the configuration sketch below)
- Neural networks: PyTorch
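
For orientation, below is a minimal sketch of a Hugging Face Trainer configuration consistent with the details listed above (PyTorch, bfloat16, 2 epochs of full-model fine-tuning on EPIC-KITCHENS-100-MQA). Only the epoch count and precision come from this card; the dataset, collator, and remaining hyperparameters are placeholders, and the actual training entry point is in the [Github Repo](https://github.com/AdaptiveMotorControlLab/LLaVAction).

```python
# Sketch of a Hugging Face Trainer setup consistent with the details listed above.
# Only num_train_epochs and bf16 come from this card; everything else is a placeholder.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="llavaction-0.5b-epic-mqa",  # placeholder output path
    num_train_epochs=2,                     # 2 epochs on EPIC-KITCHENS-100-MQA
    bf16=True,                              # bfloat16 precision
    per_device_train_batch_size=1,          # placeholder, not from the paper
    gradient_accumulation_steps=8,          # placeholder, not from the paper
    learning_rate=1e-5,                     # placeholder, not from the paper
    report_to="none",
)

# trainer = Trainer(
#     model=model,                # the full LLaVAction model (all weights trainable)
#     args=training_args,
#     train_dataset=mqa_dataset,  # EPIC-KITCHENS-100-MQA in the repo's format
#     data_collator=collator,     # multimodal collator from the LLaVAction codebase
# )
# trainer.train()
```
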
## Citation

```bibtex
@article{YeQi2025llavaction,
  title={LLaVAction: evaluating and training multi-modal large language models for action recognition},
  author={Ye, Shaokai and Qi, Haozhe and Mathis, Alexander and Mathis, Mackenzie W.},
  journal={arXiv preprint},
  year={2025}
}
```