# Can Multimodal LLMs do Visual Temporal Understanding and Reasoning? The answer is No!

Paper: [arXiv:2501.10674](https://arxiv.org/abs/2501.10674)
**Dataset preview.** Each row pairs two images with a ground-truth label (the image contents are omitted here):

| Column | Type | Notes |
|---|---|---|
| `image_1` | image | widths range from 360 px to ~4.1k px |
| `image_2` | image | widths range from 360 px to ~4.1k px |
| `label` | string | 2 classes, `First` or `Second`: which image depicts the earlier event |
The Temporal-VQA dataset is a challenging benchmark designed to evaluate the temporal reasoning capabilities of Multimodal Large Language Models (MLLMs) on tasks requiring visual temporal understanding. It emphasizes real-world temporal dynamics through two core evaluation tasks:

1. **Temporal Order Understanding** (`temporal_order` config): given a pair of images, decide which one depicts the event that happened first.
2. **Timelapse Estimation** (`timelapse_estimation` config): given a pair of images, estimate how much time has passed between them, from less than 15 seconds to more than 3 years.
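To get a quick feel for the data, you can load a split and inspect an example. A minimal sketch (the printed schema should match the preview table above):

```python
from datasets import load_dataset

# Load the temporal-order configuration of the benchmark
dataset = load_dataset('fazliimam/temporal-vqa', 'temporal_order', split='test')

print(dataset.features)     # column names and types: image_1, image_2, label
print(dataset[0]['label'])  # 'First' or 'Second'
```

The sample code below sends an image pair from each task to GPT-4o through the OpenAI Chat Completions API: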
```python
from datasets import load_dataset
from io import BytesIO
import base64
import requests

# Replace with your OpenAI API key
API_KEY = "YOUR_API_KEY"

def encode_image(image):
    """Encode a PIL image as a base64 JPEG string."""
    buffer = BytesIO()
    # Convert to RGB in case the source image has an alpha channel
    image.convert("RGB").save(buffer, format="JPEG")
    return base64.b64encode(buffer.getvalue()).decode('utf-8')

def get_gpt_response(image1, image2, query):
    """Send a text query plus two base64-encoded images to GPT-4o."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}"
    }
    payload = {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": query},
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image1}"}},
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image2}"}}
                ]
            }
        ],
        "max_tokens": 512
    }
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    return response.json()
```
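The raw `requests` call keeps the example dependency-light. If you prefer the official `openai` SDK, the same request looks like this (a sketch assuming `openai>=1.0`; `get_gpt_response_sdk` is a hypothetical drop-in replacement for the helper above):

```python
from openai import OpenAI

client = OpenAI(api_key=API_KEY)

def get_gpt_response_sdk(image1, image2, query):
    """Same multimodal request as above, via the openai client."""
    completion = client.chat.completions.create(
        model="gpt-4o",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": query},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image1}"}},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image2}"}},
            ],
        }],
    )
    return completion.model_dump()
```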
### Task 1: Temporal Order Understanding

```python
# Load the temporal-order split and encode the first image pair
dataset = load_dataset('fazliimam/temporal-vqa', 'temporal_order', split='test')
image1 = encode_image(dataset[0]['image_1'])
image2 = encode_image(dataset[0]['image_2'])

# Two prompt variants: a True/False judgment and a forced choice
prompt_1 = "Did the event in the first image happen before the event in the second image? Provide your answer in dictionary format: {'Answer':'True or False', 'Reasoning':'Brief explanation of your choice'}"
prompt_2 = "Between these two images, which one depicts the event that happened first? Provide your answer in dictionary format: {'Answer':'First image or Second image', 'Reasoning':'Brief explanation of your choice'}"

response = get_gpt_response(image1, image2, prompt_1)
print(response)
```
### Task 2: Timelapse Estimation

```python
# Load the timelapse-estimation split and encode the first image pair
dataset = load_dataset('fazliimam/temporal-vqa', 'timelapse_estimation', split='test')
image1 = encode_image(dataset[0]['image_1'])
image2 = encode_image(dataset[0]['image_2'])

# Multiple-choice prompt over six time-gap ranges
prompt = "In the given image, estimate the time that has passed between the first image (left) and the second image (right). Choose one of the following options: A. Less than 15 seconds B. Between 2 minutes to 15 minutes C. Between 1 hour to 12 hours D. Between 2 days to 30 days E. Between 4 months to 12 months F. More than 3 years. Provide your answer in dictionary format: {'Answer':'Selected option', 'Reasoning':'Brief explanation of your choice'}"

response = get_gpt_response(image1, image2, prompt)
print(response)
```
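Since the prompts ask for a Python-dict-style answer, the model's reply has to be parsed before it can be scored against the `label` column. The sketch below is one way to do that, not the paper's official evaluation script; the `parse_answer` helper and the substring match between answers like `'First image'` and labels like `'First'` are assumptions. It reuses `get_gpt_response`, `encode_image`, and `prompt_2` from above:

```python
import ast
import re

def parse_answer(response_json):
    """Extract the 'Answer' field from a reply expected to contain
    a dict-style string like {'Answer': 'First image', 'Reasoning': '...'}."""
    text = response_json['choices'][0]['message']['content']
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return None
    try:
        parsed = ast.literal_eval(match.group(0))
        return parsed.get('Answer') if isinstance(parsed, dict) else None
    except (ValueError, SyntaxError):
        return None

# Score the forced-choice prompt (prompt_2) against the ground-truth label
dataset = load_dataset('fazliimam/temporal-vqa', 'temporal_order', split='test')
correct = 0
for row in dataset:
    response = get_gpt_response(encode_image(row['image_1']),
                                encode_image(row['image_2']), prompt_2)
    answer = parse_answer(response) or ""
    # Labels are 'First' / 'Second'; answers look like 'First image' / 'Second image'
    if answer.startswith(row['label']):
        correct += 1
print(f"Accuracy: {correct / len(dataset):.2%}")
```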
If you use this benchmark, please cite:

```bibtex
@misc{imam2025multimodalllmsvisualtemporal,
      title={Can Multimodal LLMs do Visual Temporal Understanding and Reasoning? The answer is No!},
      author={Mohamed Fazli Imam and Chenyang Lyu and Alham Fikri Aji},
      year={2025},
      eprint={2501.10674},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.10674},
}
```