---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: MPCC
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- multi-modal-classification
- multi-modal-reasoning
task_ids:
- visual-question-answering
---

<p align="center">
  <h1 align="center"> MPCC: A Novel Benchmark for Multimodal Planning with Complex Constraints in Multimodal Large Language Models</h1>
</p>

<p align="center">
  <b>
  [<a href="https://github.com/j-yyyyy/MPCC">GitHub repository</a>]
  </b>
  <br />
</p>

🌟 The official repository of MPCC.

<img src="assets/fig1.png">

## 🔥 News
- 🔥 **Our work has been accepted by ACM MM 2025.**
- 🔥 **We have released the benchmark on \[[🤗HuggingFace](https://huggingface.co/datasets/jyyyyy67/MPCC)\].**

## 💡 Motivation
Multimodal Planning with Complex Constraints (MPCC) is a novel benchmark targeting real-world planning scenarios that require models to jointly reason over visual and textual modalities under complex constraints. Despite the progress in multimodal large language models (MLLMs), current benchmarks fall short in evaluating multimodal planning due to three key limitations: (1) **absence of explicit constraint modeling**, (2) **lack of systematic plan evaluation metrics**, and (3) **insufficient task diversity in constraint forms (e.g., spatial, temporal, and budget constraints)**.

To address these gaps, we introduce MPCC, a comprehensive benchmark designed for multimodal planning under complex and conflicting constraints. MPCC spans diverse planning domains, including flight, calendar, and meeting planning, and incorporates multiple constraint types that must be jointly satisfied. We further propose two evaluation metrics, feasibility and optimality, to assess a model's ability to produce valid and cost-efficient plans.

We conduct extensive evaluations across 13 state-of-the-art MLLMs (both open- and closed-source) and find that even top-performing models still struggle with explicit constraint satisfaction and multi-step reasoning. Notably, open-source MLLMs exhibit substantial limitations compared to their proprietary counterparts.

To our knowledge, MPCC is the first benchmark to explicitly target constrained multimodal planning, offering a structured and challenging testbed for future research. We hope MPCC will serve as a foundational resource for advancing constraint-aware, multimodal planning in large language models.

## 🎯 Installation

### 1. Dataset Preparation
#### Load Dataset from Hugging Face
You can download any of the subsets at one of the difficulty levels with:
```python
from datasets import load_dataset

dataset = load_dataset("jyyyyy67/MPCC", data_files="Flight Planning/flight_plan_easy.parquet")
dataset = load_dataset("jyyyyy67/MPCC", data_files="Flight Planning/flight_plan_medium.parquet")
dataset = load_dataset("jyyyyy67/MPCC", data_files="Flight Planning/flight_plan_hard.parquet")
...
```

### 2. Evaluation
We recommend using [VLMEvalKit](https://github.com/open-compass/vlmevalkit) for evaluation; the specific evaluation code is coming soon!

## 📲 Contact

Please open GitHub issues or email 📧[Yiyan Ji](mailto:jiyiiiyyy@gmail.com) if you have any questions or suggestions.