---
license: apache-2.0
pipeline_tag: video-text-to-text
library_name: transformers
---

# Seeing, Listening, Remembering, and Reasoning: A Multimodal Agent with Long-Term Memory

This repository contains a model checkpoint from the M3-Agent framework, which was introduced in the paper [Seeing, Listening, Remembering, and Reasoning: A Multimodal Agent with Long-Term Memory](https://arxiv.org/abs/2508.09736).

- 🌐 [**Project Page**](https://m3-agent.github.io)
- 💻 [**GitHub Repository**](https://github.com/hyc2026/M3-Agent-Training)

<div align="left">
  <img src="https://github.com/user-attachments/assets/c42e675e-497c-4508-8bb9-093ad4d1f216" width="40%">
</div>

## Abstract

We introduce M3-Agent, a novel multimodal agent framework equipped with long-term memory. Like humans, M3-Agent can process real-time visual and auditory inputs to build and update its long-term memory. Beyond episodic memory, it also develops semantic memory, enabling it to accumulate world knowledge over time. Its memory is organized in an entity-centric, multimodal format, allowing deeper and more consistent understanding of the environment. Given an instruction, M3-Agent autonomously performs multi-turn, iterative reasoning and retrieves relevant information from memory to accomplish the task. To evaluate memory effectiveness and memory-based reasoning in multimodal agents, we develop M3-Bench, a new long-video question answering benchmark. M3-Bench comprises 100 newly recorded real-world videos captured from a robot's perspective (M3-Bench-robot) and 929 web-sourced videos across diverse scenarios (M3-Bench-web). We annotate question-answer pairs designed to test key capabilities essential for agent applications, such as human understanding, general knowledge extraction, and cross-modal reasoning. Experimental results show that M3-Agent, trained via reinforcement learning, outperforms the strongest baseline, a prompting agent using Gemini-1.5-pro and GPT-4o, achieving 6.7%, 7.7%, and 5.3% higher accuracy on M3-Bench-robot, M3-Bench-web, and VideoMME-long, respectively. Our work advances multimodal agents toward more human-like long-term memory and provides insights into their practical design. Model, code, and data are available at the [GitHub repository](https://github.com/hyc2026/M3-Agent-Training).

## M3-Agent Demo Video

Explore M3-Agent's capabilities as a personal assistant in this demo video:

[Watch the M3-Agent demo on YouTube](https://www.youtube.com/watch?v=XUx31cBanfo)

## Usage

This model is a component of the larger M3-Agent framework, typically used for memory generation (memorization) or memory-based reasoning (control). It can be loaded with the Hugging Face `transformers` library.

For detailed usage within the full M3-Agent pipeline, including processing video and audio inputs to generate memory graphs and performing reasoning tasks, please refer to the [official GitHub repository](https://github.com/hyc2026/M3-Agent-Training).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# This assumes the repository holds the 'M3-Agent-Memorization' or a similar
# Qwen-based checkpoint; replace the ID below if your repository differs.
model_id = "ByteDance-Seed/M3-Agent-Memorization"  # adjust to the actual repo ID if needed

# Load tokenizer and model.
# trust_remote_code=True may be required for custom architectures.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use torch.float16 if your hardware lacks bfloat16 support
    device_map="auto",
    trust_remote_code=True,
)

# Set the model to evaluation mode.
model.eval()

print(f"Model {model_id} loaded successfully.")
print("This model is a component of the M3-Agent framework. For full agent pipeline usage,")
print("including video processing for memory generation and control tasks,")
print("please refer to the official GitHub repository: https://github.com/hyc2026/M3-Agent-Training")

# Example of basic text generation (demonstrates that the model loads and runs).
# The intended use of this model is within the M3-Agent pipeline, with
# multimodal inputs and structured outputs.
# messages = [
#     {"role": "user", "content": "Hello! How are you today?"},
# ]
# text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# inputs = tokenizer(text, return_tensors="pt").to(model.device)
#
# with torch.inference_mode():
#     outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
# generated_text = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
# print(f"Generated response: {generated_text}")
```
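
As a rough illustration of the role this checkpoint plays in the pipeline, the sketch below asks the loaded model to turn a short clip transcript into entity-centric memory entries. The prompt wording and transcript are hypothetical examples, not the pipeline's actual input schema; the real memorization step consumes video and audio features as described in the GitHub repository.

```python
# Illustrative only: the real M3-Agent memorization step uses multimodal inputs
# and a structured output format defined in the official pipeline. This sketch
# reuses `tokenizer` and `model` from the loading example above.
clip_transcript = (  # hypothetical transcript of a short clip
    "[00:01] A man in a blue jacket waters plants on a balcony. "
    "[00:09] He says: 'Remind me to buy fertilizer tomorrow.'"
)
messages = [
    {
        "role": "user",
        "content": "Convert this clip transcript into entity-centric memory entries:\n"
        + clip_transcript,
    }
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```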

## Citation

If you find this model or the M3-Agent project helpful, please cite the following paper:

```bibtex
@misc{long2025seeing,
      title={Seeing, Listening, Remembering, and Reasoning: A Multimodal Agent with Long-Term Memory},
      author={Lin Long and Yichen He and Wentao Ye and Yiyuan Pan and Yuan Lin and Hang Li and Junbo Zhao and Wei Li},
      year={2025},
      eprint={2508.09736},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```