---
license: mit
task_categories:
- token-classification
language:
- en
tags:
- TAM
- CAM
- MLLM
- VLLM
- Explainability
pretty_name: TAM
size_categories:
- 1B<n<10B
---
# Token Activation Map to Visually Explain Multimodal LLMs
We introduce the Token Activation Map (TAM), a method that cuts through contextual noise in Multimodal LLMs. It produces clear and reliable visualizations that reveal the visual evidence behind each token the model generates.
# Evaluation Datasets
This repository contains the datasets used to evaluate TAM. The included datasets are formatted for easy use.
# Paper and Code
[📄 arXiv Paper](https://arxiv.org/abs/2506.23270)
[🐙 GitHub Page](https://github.com/xmed-lab/TAM)
## Citation
```
@misc{li2025tokenactivationmapvisually,
      title={Token Activation Map to Visually Explain Multimodal LLMs},
      author={Yi Li and Hualiang Wang and Xinpeng Ding and Haonan Wang and Xiaomeng Li},
      year={2025},
      eprint={2506.23270},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.23270},
}
```