---
license: mit
language:
  - en
task_categories:
  - question-answering
tags:
  - Multi-modal
  - Multi-agent
  - Theory_of_Mind
size_categories:
  - n<1K
---

# MuMA-ToM: Multi-modal Multi-Agent Theory of Mind

**AAAI 2025 (Oral)**

[🏠Homepage] [💻Code] [📝Paper]

MuMA-ToM is the first multi-modal Theory of Mind benchmark for evaluating mental reasoning in embodied multi-agent interactions. It was designed with several key features in mind:

  1. It is factually correct, concise, and readable.
  2. Answering its questions requires integrating information from multiple modalities.
  3. It tests understanding of multi-agent interactions, including beliefs, social goals, and beliefs about others' goals.
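As a rough illustration of the question-answering task format, here is a minimal sketch of scoring multiple-choice answers. The item fields below are hypothetical, not the dataset's actual schema; consult the dataset files for the real structure.

```python
# Illustrative sketch only: the item schema here is hypothetical and does
# not reflect MuMA-ToM's actual field names.
example_item = {
    "question": "Given the video and text, which belief best explains "
                "the first agent's action?",
    "choices": ["The agent believes the cup is in the kitchen",
                "The agent believes the cup is in the living room"],
    "answer": 0,  # index of the correct choice
}

def score(predictions, items):
    """Return the fraction of items where the predicted index is correct."""
    correct = sum(p == item["answer"] for p, item in zip(predictions, items))
    return correct / len(items)

print(score([0], [example_item]))  # -> 1.0
```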

## Leaderboard

Here is the leaderboard for MuMA-ToM. Please contact us if you'd like to add your results.

## Citation

If you find our work interesting or useful, please cite the paper:

```bibtex
@inproceedings{shi2025muma,
  title={{MuMA-ToM}: Multi-modal Multi-agent Theory of Mind},
  author={Shi, Haojun and Ye, Suyu and Fang, Xinyu and Jin, Chuanyang and Isik, Leyla and Kuo, Yen-Ling and Shu, Tianmin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={2},
  pages={1510--1519},
  year={2025}
}
```