diff --git "a/intro_28K/test_introduction_long_2405.03690v2.json" "b/intro_28K/test_introduction_long_2405.03690v2.json" new file mode 100644--- /dev/null +++ "b/intro_28K/test_introduction_long_2405.03690v2.json" @@ -0,0 +1,104 @@ +{ + "url": "http://arxiv.org/abs/2405.03690v2", + "title": "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs", + "abstract": "Recent advancements in Large Language Models (LLMs) have led to the\ndevelopment of Video Large Multi-modal Models (Video-LMMs) that can handle a\nwide range of video understanding tasks. These models have the potential to be\ndeployed in real-world applications such as robotics, AI assistants, medical\nsurgery, and autonomous vehicles. The widespread adoption of Video-LMMs in our\ndaily lives underscores the importance of ensuring and evaluating their robust\nperformance in mirroring human-like reasoning and interaction capabilities in\ncomplex, real-world contexts. However, existing benchmarks for Video-LMMs\nprimarily focus on general video comprehension abilities and neglect assessing\ntheir reasoning capabilities over complex videos in the real-world context, and\nrobustness of these models through the lens of user prompts as text queries. In\nthis paper, we present the Complex Video Reasoning and Robustness Evaluation\nSuite (CVRR-ES), a novel benchmark that comprehensively assesses the\nperformance of Video-LMMs across 11 diverse real-world video dimensions. We\nevaluate 9 recent models, including both open-source and closed-source\nvariants, and find that most of the Video-LMMs, especially open-source ones,\nstruggle with robustness and reasoning when dealing with complex videos. Based\non our analysis, we develop a training-free Dual-Step Contextual Prompting\n(DSCP) technique to enhance the performance of existing Video-LMMs. Our\nfindings provide valuable insights for building the next generation of\nhuman-centric AI systems with advanced robustness and reasoning capabilities.\nOur dataset and code are publicly available at:\nhttps://mbzuai-oryx.github.io/CVRR-Evaluation-Suite/.", + "authors": "Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Jameel Hassan, Muzammal Naseer, Federico Tombari, Fahad Shahbaz Khan, Salman Khan", + "published": "2024-05-06", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "Recently, Large Language Models (LLMs) [Touvron et al., 2023, Zheng et al., 2023, Jiang et al., 2024] have demonstrated impressive reasoning and planning capabilities while simultaneously handling a wide range of NLP tasks [Wei et al., 2022a, Brown et al., 2020]. Consequently, their integration with the vision modality, specifically for video understanding tasks, has given rise to Video Large Multi-modal Models (Video-LMMs) [Li et al., 2023b]. These models act as visual chatbots that accept both text and video as input and handle a diverse set of tasks, including video comprehension [Maaz et al., 2023], detailed video understanding [Lin et al., 2023], and action grounding [Zhang et al., 2023]. As these models directly capture video data, they hold substantial potential for deployment in real-world applications such as robotics, surveillance, medical surgery, and autonomous vehicles. 
However, as these models assume an expanding role in our everyday lives, assessing their performance in comprehending complex videos and demonstrating reliable reasoning and robustness capabilities across diverse real-world contexts becomes essential. Video-LMMs with such capabilities will be more effective when integrated into our daily lives for solving perception tasks and will be a promising step towards building human-centric AI-assistive systems.
[Table 1: Comparison of CVRR-ES with existing benchmarks for video QA (MSVD-QA [Xu et al., 2017], MSRVTT-QA [Xu et al., 2017], TGIF-QA [Jang et al., 2017], ActivityNet-QA [Yu et al., 2019], Video-ChatGPT [Maaz et al., 2023], MVBench [Li et al., 2023c], SEED-Bench [Li et al., 2023a]) along the dimensions of textual robustness, complex reasoning, in-the-wild (OOD) videos, contextual dependency, multiple actions, and temporal order & fine-grained understanding. The CVRR-ES benchmark represents an initial effort to assess Video-LMMs in the context of their applicability and suitability in real-world applications.]
[Figure 1: Left: CVRR-ES comprises 11 diverse complex video evaluation dimensions encompassing a variety of complex, real-world contexts; the share of QA pairs per dimension is: multiple actions in a single video 13.25%, understanding of emotional context 12.17%, interpretation of social context 11.67%, interpretation of visual context 11.38%, fine-grained action understanding 9.58%, partial actions 8.58%, unusual and physically anomalous activities 7.92%, continuity and object instance count 7.38%, time order understanding 6.33%, non-existent actions with non-existent scene depictions 6.0%, non-existent actions with existent scene depictions 5.75%. Right: Overall performance of Video-LMMs on the CVRR-ES benchmark (accuracy %, averaged across the 11 video dimensions): Video-LLaVA 15.92%, MovieChat 16.41%, LLaMA-VID 16.46%, Video-LLaMA-2 21.62%, Video-ChatGPT 24.96%, VideoChat 25.78%, TimeChat 32.89%, Gemini-Pro 53.2%, GPT4V(ision) 70.78%, Human 96.67%.]
Several attempts in the literature have been made to benchmark Video-LMMs. SEED-Bench [Li et al., 2023a] curated an MCQ-based benchmarking dataset including 3 evaluation dimensions for videos. Similarly, MVBench [Li et al., 2023c] constructed a Video-LMM benchmark by assembling 20 challenging video tasks for evaluating the spatial and temporal understanding of these models. While these methods aim at benchmarking Video-LMMs, they predominantly evaluate video and/or temporal comprehension abilities and overlook the complex reasoning aspects of Video-LMMs in real-world contexts, as well as their robustness towards user input text queries, both of which are crucial to ensure their responsible engagement with humans in various real-world situations in the wild. While some studies have explored similar areas such as hallucinations in image-based LLMs [Liu et al., 2023a, Qian et al., 2024], no such comprehensive study exists for the case of Video-LMMs. Motivated by the wide-scale applications of Video-LMMs and the lack of world-centric complex video benchmarking efforts, we present a new benchmark, the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), to comprehensively assess the performance of Video-LMMs. As shown in Tab. 1,
CVRR-ES evaluates Video-LMMs on key aspects of robustness and reasoning in videos, encompassing video domains that more accurately test models in real-world scenarios, such as videos having contextual dependency and in-the-wild aspects. CVRR-ES is an open-ended video QA benchmark comprising 11 real-world video category dimensions (Fig. 1, left) that encompass diverse evaluation aspects. These dimensions span from context-dependent categories (e.g., social, emotional) to ones that often take place in the wild, such as videos containing physically anomalous activities. We comprehensively evaluate a representative set of 9 recent Video-LMMs (Fig. 1, right), including both open-source and closed-source models, on the CVRR-ES benchmark using an LLM-assisted automatic evaluation framework [Maaz et al., 2023, Cai et al., 2023]. The performance of Video-LMMs on the CVRR-ES benchmark reveals that these models struggle to correctly comprehend complex videos, indicating their weak reasoning and lack of robustness to textual user queries (Fig. 2). For instance, state-of-the-art Video-LLaVA [Lin et al., 2023] achieves only 15.92% performance averaged across the 11 video dimensions of CVRR-ES. In contrast, closed-source models including GPT4V(vision) [OpenAI, 2023] and Gemini-Vision-Pro [Google, 2023] exhibit relatively stronger performance but still lag behind the performance of humans. Using the CVRR-ES benchmark, we perform extensive quantitative and qualitative analysis, formulating important insights into these Video-LMMs based on their failure cases and individual performances across the diverse video dimensions.
[Figure 2: We observe that most Video-LMMs struggle to reason over complex videos (rows 1-3) and exhibit weak robustness and rectification capabilities when prompted to generate answers for user questions that can sometimes be confusing (row 4). The QA pairs in the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES) benchmark assess the performance of Video-LMMs beyond general video comprehension.]
Based on our analysis, we observe that standard prompting of Video-LMMs struggles to steer their focus towards complex video understanding. Additionally, their limitations in reasoning and robust video understanding of real-world scenarios are dominantly driven by the quality of textual inputs (i.e., user questions). Based on these insights, we develop a training-free Dual-Step Contextual Prompting (DSCP) technique, which effectively steers the model\u2019s behavior during inference to elicit video-specific reasoning and improved robustness within Video-LMMs. With DSCP, Video-LMMs show substantial improvements on our benchmark, suggesting the potential of prompting techniques for Video-LMMs. Our main contributions can be summarized as follows: \u2022 We present the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), a Video Question Answering benchmark designed to assess the reasoning and robustness capabilities of Video-LMMs across 11 diverse world-centric complex video dimensions. \u2022 We comprehensively evaluate both open-source and closed-source Video-LMMs on the CVRR-ES benchmark and find that most models exhibit weak performance, highlighting their limited reasoning in complex videos and lack of robustness towards user text queries. \u2022 We conduct extensive analysis and formulate important conclusions about Video-LMMs based on their failure cases and performance on the CVRR-ES benchmark. Our findings provide valuable insights for building the next generation of human-centric AI systems with improved robustness and reasoning capabilities.
\u2022 To improve Video-LMMs\u2019 reasoning and robustness abilities, we formulate a model-agnostic and training-free prompting technique that effectively enhances their performance.", "main_content": "Video Large Multi-modal Models (Video-LMMs). Video-LMMs [Lin et al., 2023, Li et al., 2023d, Zhang et al., 2023] are advanced visual chatbots capable of performing a wide range of video understanding tasks, including video comprehension and captioning, video question-answering, and action grounding. These models accept both video and textual inputs and generate textual responses. From an architectural perspective, Video-LMMs typically combine pre-trained vision backbones [Radford et al., 2021, Fang et al., 2023, Wang et al., 2022b] with large language models [Touvron et al., 2023, Zheng et al., 2023] using connector modules such as MLP adapters, Q-former [Dai et al., 2023], and gated attention [Alayrac et al., 2022]. VideoChat [Li et al., 2023b] and VideoChat-GPT [Li et al., 2023d] presented initial open-source efforts in this direction and were trained with two stages of alignment and video-instruction following objectives. Recently, more advanced Video-LMMs have emerged in the field, with some models focusing on improving model architectures [Li et al., 2023d], expanding to new tasks [Munasinghe et al., 2023], and enabling support for long videos [Song et al., 2023, Ren et al., 2023]. In this work, we aim to develop a comprehensive benchmarking evaluation framework to assess the reasoning and robustness capabilities of Video-LMMs and develop a training-free prompting technique to improve their performance on these fronts. Benchmarking Video-LMMs. With the growing number of Video-LMMs emerging in the research community, several works have presented evaluation frameworks to assess and quantify these models for benchmarking and analysis purposes. SEED-Bench [Li et al., 2023a] evaluates the visual capabilities of both image and video LMMs across 12 unique dimensions. MVBench [Li et al., 2023c] curates 20 challenging video tasks to evaluate the spatial and temporal understanding of Video-LMMs. Video-ChatGPT [Maaz et al., 2023] develops a quantitative evaluation framework to assess model understanding across five aspects of general video comprehension, such as the correctness and consistency of model captions. While these evaluation frameworks provide effective insights, their assessments do not extend beyond general video-comprehension metrics to more advanced aspects of reasoning and robustness, particularly for real-world context cases. In contrast, our work focuses on providing a complex video reasoning and robustness benchmark across 11 diverse real-world-centric evaluation types and offers a more thorough assessment of Video-LMMs in practical applications. Training-free Prompting Techniques. Steering model behavior at inference time using prompting has become a common paradigm in the NLP domain. Prompting [Wei et al., 2022b, Wang et al., 2022a] refers to the set of instructions given as a prefix to the language model to better align model responses with human intent without the need for task-specific fine-tuning. Prompting techniques range from a single sentence (e.g., \"Let\u2019s think step by step\"), as in zero-shot chain-of-thought prompting [Wei et al., 2022b], to more detailed techniques such as combining chain-of-thought prompting with few-shot learning [Brown et al., 2020] and self-consistency chain-of-thought prompting [Wang et al., 2022a].
Surprisingly, training-free prompting techniques for Video Large Multi-modal Models (Video-LMMs) have been minimally explored. In this work, we develop a dual-step prompting technique based on principled prompt instructions specifically designed to steer the model\u2019s behavior for improved reasoning and robustness over complex videos. 3 Complex Video Reasoning and Robustness Evaluation Suite As Video-LMMs are touching new real-world applications, it is essential to ensure that they robustly handle user inputs, comprehend the visual world, and exhibit human-like reasoning capabilities. In this work, our goal is to establish a comprehensive benchmark that specifically assesses the robustness and reasoning capabilities of Video-LMMs in a variety of complex and contextual videos covering diverse scenarios. To this end, we present the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES). We first provide a holistic overview of the CVRR-ES benchmark below and detail the video evaluation dimensions in Sec. 3.1. Subsequently, we present the CVRR-ES creation process in Sec. 3.2. We provide details on the dataset quality and human evaluation in Appendix B. Overview of CVRR-ES Benchmark. CVRR-ES encompasses evaluation dimensions that cover diverse video categories related to real-world scenarios, ranging from context-dependent (e.g., social, emotional) categories to video types that often take place in the wild (e.g., anomalous activities). Specifically, we have compiled 11 video evaluation dimensions and curated 2,400 high-quality open-ended question-answer (QA) pairs, spanning 217 high-quality videos. The average video duration is 22.3 seconds, with maximum and minimum durations of 183 and 2 seconds, respectively. [Figure 3: CVRR-ES Benchmark Statistics. Left: Frequency distribution of the type of questions. Right: Illustration of the most frequent keywords in the answer set of the CVRR-ES benchmark.] In Fig. 3 (left), we quantify the distribution of different question types present in our benchmark. This diverse set of questions aims to comprehensively capture the model\u2019s answering capabilities based on reasoning and robustness criteria. We show the word cloud plot based on the frequency of keywords in the answer set of CVRR-ES in Fig. 3 (right). The frequent words correspond to objects and attributes with which Video-LMMs could most likely interact when deployed in practical scenarios. 3.1 CVRR-ES Video Category Definitions. To assess the robustness and reasoning capabilities of Video-LMMs in the CVRR-ES benchmark, we carefully curate 11 diverse benchmark evaluation categories. As shown in Fig. 1 (left), these categories encompass a wide range of real-world complex and contextual videos within each category. Below, we define each video evaluation dimension of the CVRR-ES benchmark in detail. 1) Multiple actions in a single video. This category includes videos that contain multiple activities within a single video. The number of activities varies from 2 to 4 in these videos, mostly featuring humans performing multiple activities. We curate QA pairs in this category aiming to identify whether the model can reason over challenging questions concerning multiple actions and understand the interrelation between different actions within a video. 2) Fine-grained action understanding. We gather video samples with fine-grained actions. These actions encompass various fine-grained activities performed by humans, including pushing, opening, closing, spreading, sitting, etc.
This category presents a challenge to the model\u2019s comprehension of subtle and fine-grained actions through carefully crafted questions. 3) Partial actions. Based on our observations that Video-LMMs predominantly generate content that may be contextually relevant and likely to co-occur with the depicted scene in the video, we compile videos featuring actions that have a high probability of being followed by subsequent actions but are not executed in the video. For instance, an action such as cracking an egg in a kitchen setting often anticipates the subsequent action of frying/cooking the egg. 4) Time order understanding. Accurately recognizing the temporal sequence of activities in videos is crucial for distinguishing between atomic actions, such as pushing and pulling. We collect videos of fine-grained actions occurring in a particular temporal direction and curate challenging questions. 5) Non-existent actions with existent scene depictions. This category examines the model\u2019s robustness and reasoning behavior in scenarios where we introduce non-existent activities into the video without altering the physical and spatial scenes or environmental details in it. 6) Non-existent actions with non-existent scene depictions. In this evaluation category, we make the QA task more challenging by creating questions that include both non-existent activities and non-existent scene comprehension. Non-existent scene comprehension involves changing the objects, attributes of objects, and background scene description. This evaluates the model\u2019s reliability to correct misleading questions and avoid generating imaginary content. 7) Continuity and object instance count. This category contains videos (both real and simulations) designed to test the models\u2019 ability to accurately recognize the number of instances of objects, people, etc., and distinguish between existing objects and new ones introduced in the same video scene. 8) Unusual and physically anomalous activities. This category consists of videos with unconventional activities and physical phenomena that seemingly defy the laws of physics. We meticulously collect relevant videos from various sources on the internet, focusing on capturing unusual activities such as a person floating in the air or driving a motorbike on a running river. We believe that assessing Video-LMMs in such scenarios is crucial, as it allows us to determine whether they can generalize to understand actions in out-of-distribution videos that can occur in practical situations. 9) Interpretation of social context. In the real world, human actions are often influenced by social context in their surroundings. For instance, a person might be helping an elderly individual cross the road. This category evaluates Video-LMMs on such scenarios to determine their ability to accurately infer the rationale behind actions based on the depicted social context. We gather diverse videos from the internet and create challenging questions that encompass the social context dimension. 10) Understanding of emotional context. Similar to social context, humans can accurately understand and interpret each other\u2019s actions by considering the emotional context. For example, a person being emotionally moved and crying in a gathering could be a happy moment if it is one stemming from success/joy. We collect videos and curate challenging reasoning questions aimed at recognizing the nature of actions solely based on emotional context for evaluating Video-LMMs.
11) Interpretation of visual context. This dimension focuses on assessing the model\u2019s reasoning abilities to recognize the actions by leveraging the overall visual contextual cues in the video. We curate specific videos containing actions where activity identification and reasoning require visual contextual cues. For example, to identify the number of people present based on the presence of shadows, one must utilize the visual context from the shadows to reason about the question. Qualitative Examples. Fig. 2 shows examples of collected videos for the CVRR-ES benchmark. The curated videos are carefully selected to be diverse and contain rich spatio-temporal content, aligned with the proposed video evaluation dimensions. 3.2 Building CVRR-ES Benchmark After defining the video evaluation dimensions, we now proceed toward building the CVRR-ES benchmark which consists of three stages. We present each stage in detail below. Stage 1: Data collection and Annotation. We first collect high-quality videos and annotate each video using human assistance. To ensure that each evaluation dimension captures the relevant attributes and information, we meticulously select videos that are representative of specific characteristics associated with that dimension. Across the 11 dimensions, 214 unique videos are selected for the benchmark with around 20 videos per evaluation category. Around 60% of these videos are collected from public academic datasets. To introduce diversity in the benchmark distribution, we incorporate video samples from multiple academic datasets including Something-Something-v2 [Goyal et al., 2017], CATER [Girdhar and Ramanan, 2020], Charades [Sigurdsson et al., 2016], ActivityNet [Caba Heilbron et al., 2015], HMDB51 [Kuehne et al., 2011], YFCC100M [Thomee et al., 2016]. The remaining 40% of videos are collected from the internet. Following the video collection process, two experienced human annotators are assigned to generate captions for each video. For videos where initial captions or metadata are available from academic datasets, the captions are generated by the annotators based on them. For videos collected from the internet, captions are entirely generated by human annotators. To ensure consistency and high quality, we provide annotation instructions to annotators, who generate captions accordingly. Personalized annotation guidelines are used for each video category. Refer to additional details in Appendix B. Stage 2: Question-Answer Generation. The first challenge is to select an evaluation setting to assess Video-LMMs. Humans typically engage in free-form conversation to interact with each other in day-to-day life. Inspired by this, we aim to simulate a similar style of interaction with Video-LMMs by curating open-ended QA pairs to evaluate these models for robustness and reasoning. We feed detailed ground-truth video captions to GPT-3.5 LLM, which are utilized to generate open-ended questions covering both reasoning and robustness aspects. Reasoning QA pairs: With Video-LMMs beginning to interact more directly with humans in our lives, it\u2019s crucial to validate the reasoning abilities of Video-LMMs for more reliable Human-AI interaction. When evaluating the reasoning capabilities of Video-LMMs, we aim to determine whether these models can understand the input video not only by analyzing spatial content but also by grasping the underlying rationale behind the occurring activities and their relationships with the surrounding context. 
This involves creating questions that go beyond simple video comprehension and scene description and require the model to engage in complex logical inference, contextual understanding, and reasoning about counterfactual and hypothetical scenarios. Robustness QA pairs: In addition to evaluating the reasoning capabilities of LLMs, it is important to assess Video-LMMs to ensure their robust and responsible performance in real-world scenarios. In the context of Video-LMMs, robustness can be evaluated from both visual (video input) and textual interfaces. Our focus in this work lies on textual interface robustness, particularly testing the model\u2019s comprehension when posed with misleading or confusing questions. This scenario mirrors realistic situations where users, based on their expertise levels, may pose irrelevant, misleading, or confusing questions. It is crucial for models to demonstrate reliability and robustness in handling such queries and avoid generating unreal or hallucinated content for input videos. We curate specific prompts for each evaluation dimension to instruct the LLM in generating QA pairs. Example prompts used as instructions to LLMs for curating QA pairs for the robustness and reasoning aspects are provided in Fig. 14 in Appendix D. Stage 3: QA Pairs Filtration. After generating QA pairs, a manual filtration step is employed, with human assistance to verify each generated QA pair. Approximately 30% of the QA pairs generated by GPT-3.5 are found to be noisy, containing questions that are unrelated to the video evaluation dimensions or unanswerable based on the provided ground-truth captions. Additionally, many questions contain answers within the question itself. Therefore, an exhaustive filtering process is conducted which involves QA rectification and removing those samples which are not relevant to the video or evaluation type. This process results in a final set of 2,400 high-quality QA pairs for the CVRR-ES benchmark. Examples of QA pairs are shown in Tab. 4 in the Appendix. Stage 4: Evaluation Procedure. Previous methods in the literature [Maaz et al., 2023, Cai et al., 2023, Liu et al., 2023a, Qian et al., 2024] have explored using LLMs as judges for quantifying results in open-ended QA benchmarks. We adopt a similar approach and instruct LLMs to act as teachers to assess the correctness of predicted responses from Video-LMMs compared to ground-truth answers. We generate open-ended predictions from Video-LMMs by providing video-question pairs as inputs and then present the model predictions and their corresponding ground-truth responses to the LLM judge alongside the evaluation prompt. The judge determines whether the prediction is correct or incorrect through a binary judgment, assigns a score from 1 to 5 representing the quality of the prediction, and provides a reasoning to explain its decision. Our ablative analysis in Appendix D demonstrates that reasoning-constrained LLM-based evaluation aligns well with human-based judgment. The evaluation prompt is shown in Fig. 13 in Appendix D. 4 Dual-Step Contextual Prompting for Video-LMMs. Given their wide-scale potential in practical downstream applications, new Video-LMMs are frequently introduced by the research community. Despite the availability of numerous Video-LMMs, the majority of them are trained using only positive examples and video-conversational templates that are primarily limited to tasks such as video-captioning and video question answering.
This leads to highly over-affirmative behavior and a lack of self-rectification abilities in these models (Sec. 5.4). Additionally, the templates have minimal focus on enhancing reasoning and robustness capabilities through reasoning-based instruction-tuning pairs, resulting in weak performance of such models against robustness and reasoning QA evaluations in the CVRR-ES benchmark. Furthermore, curating reasoning-based instruction fine-tuning datasets requires meticulous data curation steps, and retraining these models is computationally expensive [Li et al., 2023d, Ren et al., 2023]. Alternatively, training-free prompting techniques in the NLP literature, such as chain-of-thought and self-consistency prompting [Wei et al., 2022b, Wang et al., 2022a], have shown effectiveness in eliciting reasoning abilities in LLMs. Inspired by these approaches, we introduce a prompting technique called Dual-Step Contextual Prompting (DSCP), which aims to steer Video-LMM focus for enhanced reasoning while simultaneously encouraging the models to provide robust and grounded answers.
[Figure 4: Principled prompt instructions in our DSCP method for improving reasoning and robustness in Video-LMMs. Step 1 (retrieving contextual reasoning information): \"As an intelligent video comprehension model, focus on these guidelines: 1. Differentiate recurring objects, count accurately, and identify movements and poses. 2. Understand directional movements and temporal order. 3. Pay attention to fine-grained actions with precision. 4. Assess incomplete actions without assuming completion. 5. Detect emotional, social, and visual cues. 6. Capture and analyze all relevant actions. 7. Identify unusual actions accurately. 8. Disagree with incorrect information given in question. 9. If you do not find the evidence in the frames, you can give a definite answer by assuming that the asked action/attribute is not present. 10. Provide to the point and concise response. Now, proceed with answering the following question faithfully while keeping above guidelines in mind: Question: What is happening in the video?\" Step 2 (context-conditioned question answering): \"Context for the given video is: {step 1 response}. Now answer a question truthfully based on the video and the provided context. Question: {User question}\"]
[Figure 5: Qualitative results of the DSCP prompting method. Using our DSCP approach, Video-LMMs demonstrate enhanced robustness and reasoning capabilities over complex videos.]
DSCP is a two-step prompting method that 1) ensures that the model comprehends the video while reasoning over crucial aspects of complex video understanding, such as contextual information and the complex relationships between objects and motions, and 2) encourages robustness by generating the response to the question while conditioning on both the video and the context retrieved in the first step. Below we discuss each step of DSCP in detail. Step 1: Reasoning over the video. We first guide Video-LMMs using principled prompts to interpret video content from a reasoning perspective. As shown in Fig. 4 (in blue), we formulate ten principled reasoning-based instructions for prompting, P_reason, which direct Video-LMMs to not only comprehend the general video content but also reason over the rationale behind occurring activities and their relationships with the surrounding context. These prompt instructions include specific considerations like contextual priors, the temporal order of actions, instance count, and attributes. Additionally, the prompting technique incorporates instructions to ensure conciseness and factuality, aiming to mitigate hallucinations. Given a Video-LMM F and input video V, we retrieve contextual reasoning information I_context by providing the principled reasoning prompt P_reason along with the video to the LMM: I_context = F(P_reason | V).
The contextual information is utilized in the second step of DSCP to generate a more grounded response to the user question. Step 2: Context-conditioned question answering. As discussed earlier, Video-LMMs are primarily trained with positive examples to answer questions, with limited emphasis on reasoning and robustness aspects. Consequently, enabling direct interaction of Video-LMMs with users in real-world scenarios can result in undesired responses when the user question is confusing or deceiving, due to their extreme over-affirmative behavior. To address these challenges, we propose incorporating an additional inference step in Video-LMMs before answering the user\u2019s question. We note that Video-LMMs often possess factual knowledge about the video content but may become distracted and produce hallucinations when prompted with confusing or misleading questions (more details in Appendix C). Specifically, we devise a prompting method that conditions the model to first comprehend the video in detail without attending to the user question, thereby eliminating the influence of the question. This complex video comprehension information is the I_context formulated in step 1. Subsequently, we pose the user question in the second step using a prompt P_user, which combines the user question and the contextual reasoning information (Fig. 4, in green), while conditioning the model on both the video and the contextual reasoning information I_context. Concretely, Final response = F(P_user | V), where P_user = [question; I_context]. Intuitively, the factual content generated in the first step will guide the model towards a robust response in the second step, producing factual and correct responses even in the presence of noisy/misleading user questions. We illustrate the qualitative results of the DSCP method in Fig. 5. This approach leads to responses that are better grounded in the actual video content and are robust against potential lesser-quality user queries.
Table 2: Evaluation results of Video-LMMs across various video evaluation categories on the CVRR-ES benchmark. We present results for both open-source and closed-source models, alongside human evaluation results, which serve as the upper bound on the benchmark. Columns (in order): Video-LLaMA-2, VideoChat, Video-ChatGPT, Video-LLaVA, MovieChat, LLaMA-VID, TimeChat, Gemini-V Pro, GPT4V, Human.
Multiple actions in a single video: 16.98, 23.90, 27.67, 15.72, 12.58, 17.92, 28.30, 43.08, 57.55, 93.40.
Fine-grained action understanding: 29.57, 33.48, 26.96, 25.22, 23.48, 26.09, 39.13, 51.61, 77.39, 95.65.
Partial actions: 24.76, 33.01, 22.82, 13.59, 21.36, 14.56, 49.51, 67.48, 73.79, 98.54.
Time order understanding: 16.45, 31.58, 27.63, 21.05, 16.45, 19.74, 34.21, 45.39, 57.89, 97.37.
Non-existent actions with existent scene: 10.14, 15.22, 23.19, 5.07, 5.07, 2.90, 23.19, 57.25, 71.01, 97.10.
Non-existent actions with non-existent scene: 13.19, 14.58, 17.36, 3.47, 11.81, 6.94, 13.89, 49.64, 75.00, 100.00.
Continuity and object instance count: 28.25, 24.29, 28.41, 21.47, 19.77, 24.86, 34.46, 36.16, 62.71, 96.49.
Unusual and physically anomalous activities: 18.95, 18.42, 18.95, 15.79, 17.89, 16.32, 27.37, 60.00, 74.74, 96.84.
Interpretation of social context: 25.00, 31.07, 32.50, 18.93, 17.14, 13.93, 39.29, 64.29, 79.64, 97.51.
Understanding of emotional context: 21.92, 23.63, 21.23, 15.07, 13.70, 14.73, 27.40, 47.26, 66.44, 95.55.
Interpretation of visual context: 32.60, 34.43, 27.84, 19.78, 21.25, 23.08, 45.05, 63.00, 82.42, 94.87.
Average: 21.62, 25.78, 24.96, 15.92, 16.41, 16.46, 32.89, 53.20, 70.78, 96.67.
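To make the two-step procedure concrete, the following is a minimal, model-agnostic sketch of DSCP. The generic video_lmm callable, the dscp_answer function name, and the abbreviated guideline string are illustrative assumptions rather than part of any particular Video-LMM API; the full principled instructions are the ten listed in Fig. 4.

```python
# Minimal sketch of Dual-Step Contextual Prompting (DSCP).
# `video_lmm` is assumed to be any callable that takes a video and a text
# prompt and returns a text response. The guideline text below is an
# abbreviation of the principled instructions shown in Fig. 4.

STEP1_GUIDELINES = (
    "As an intelligent video comprehension model, focus on these guidelines: "
    "differentiate recurring objects and count accurately; understand directional "
    "movements and temporal order; attend to fine-grained and incomplete actions "
    "without assuming completion; detect emotional, social, and visual cues; "
    "identify unusual actions; disagree with incorrect information in the question; "
    "and keep the response concise. Now, proceed with answering the following "
    "question faithfully while keeping the above guidelines in mind:\n"
    "Question: What is happening in the video?"
)

def dscp_answer(video_lmm, video, user_question: str) -> str:
    # Step 1: retrieve contextual reasoning information I_context = F(P_reason | V),
    # without exposing the user question to the model.
    i_context = video_lmm(video, STEP1_GUIDELINES)

    # Step 2: answer the user question conditioned on both the video and the
    # retrieved context, i.e. Final response = F(P_user | V) with
    # P_user = [question; I_context].
    p_user = (
        f"Context for the given video is: {i_context}\n"
        "Now answer a question truthfully based on the video and the provided context.\n"
        f"Question: {user_question}"
    )
    return video_lmm(video, p_user)
```

Because both steps reuse the same underlying Video-LMM, this scheme adds one extra inference call per question but requires no fine-tuning or retraining.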
As we will later show, the DSCP technique effectively enhances the performance of Video-LMMs on the CVRR-ES benchmark. 5 Evaluation Experiments on CVRR-ES. Video-LMMs. Both open-source and closed-source models are selected for the evaluation. Among the open-source models, we evaluate 7 recent Video-LMMs, including Video-LLaVA [Lin et al., 2023], TimeChat [Ren et al., 2023], MovieChat [Song et al., 2023], LLaMA-VID [Li et al., 2023d], VideoChat [Li et al., 2023b], Video-ChatGPT [Maaz et al., 2023], and Video-LLaMA-2 [Zhang et al., 2023]. For evaluating closed-source models, we use Gemini-Pro-Vision [Google, 2023] and GPT-4V(vision) [OpenAI, 2023]. Refer to Appendix A for implementation details. 5.1 Main Experiments on CVRR-ES. In Tab. 2, we present the evaluation results of Video-LMMs on the 11 dimension categories of the CVRR-ES benchmark. Below, we present several key findings. Open-source Video-LMMs struggle on the CVRR-ES benchmark. All open-source LMMs show inferior performance across the different evaluation dimensions of CVRR-ES. Interestingly, some of the earlier developed open-source Video-LMMs, like Video-LLaMA, VideoChat, and Video-ChatGPT, exhibit higher performance compared to more recent models such as Video-LLaVA, MovieChat, and LLaMA-VID. Overall, TimeChat achieves the highest performance of 32.89% averaged across the 11 evaluation dimensions among open-source LMMs, followed by VideoChat with a score of 25.78%. Humans rank highest on the CVRR-ES benchmark. Human studies achieve the highest performance on the CVRR-ES benchmark, with over 95% accuracy across all evaluation dimensions. Furthermore, these results suggest that the CVRR-ES QA pairs are answerable and suitable for benchmarking. Closed-source models perform competitively on CVRR-ES. As shown in Tab. 2, both Gemini and GPT4V surpass the performance of open-source models and achieve high gains across all evaluation dimensions. The competitive results of GPT4V and Gemini on complex video evaluation dimensions such as partial actions, non-existent action/scene depiction, and context-dependent categories show that these models have a more sophisticated understanding of the complex visual contents of videos and have strong capabilities to rectify misleading and confusing user questions. Overall, GPT4V improves over Gemini by 17.58% and provides an average accuracy of 70.78% on CVRR-ES.
Table 3: Prompting methods. DSCP (Stage 1) uses only the principled instructions designed in step 1, while DSCP (Both stages) uses the complete dual-step prompting technique. Columns (in order): VideoChat, Video-LLaVA, MovieChat, LLaMA-VID, TimeChat.
Standard prompting: 25.78, 15.92, 16.41, 16.46, 32.89.
Chain of Thought (CoT) prompting: 22.44, 25.87, 15.89, 29.68, 39.57.
DSCP (Stage 1): 38.07, 32.12, 28.05, 25.13, 33.04.
DSCP (Both stages): 47.92, 37.93, 35.87, 46.85, 39.45.
5.2 Effectiveness of the DSCP method for improving Video-LMM performance. [Figure 6: The DSCP technique effectively improves the performance of Video-LMMs on the CVRR-ES benchmark; absolute gains in accuracy % (averaged over 11 video dimensions) are Video-LLaVA +22.01, MovieChat +19.46, LLaMA-VID +30.39, Video-LLaMA-2 +16.15, Video-ChatGPT +8.93, VideoChat +22.14, TimeChat +6.56, and Gemini-Pro +5.02.] We next integrate the DSCP technique with Video-LMMs and present results on the CVRR-ES benchmark in Fig. 6. The results indicate that DSCP improves the model\u2019s performance compared with models that use standard prompting (i.e., using only the question itself).
These results suggest that prompting techniques in Video-LMMs can better guide models for improved reasoning and robustness. With DSCP, initially low-performing Video-LMMs such as Video-LLaVA, MovieChat, and LLaMA-VID show much better relative gains and become competitive with other models. The highest relative gain of 184% is achieved by LLaMA-VID, which moves from 7th place on the leaderboard to 2nd among the open-source models after utilizing DSCP prompting. We observe similar overall positive trends of using DSCP with the closed-source model Gemini, which improves on the benchmark by an absolute overall gain of 5.02%. We provide more detailed results comparisons in Appendix C. 5.3 Different prompting techniques. We study the contribution of each step of DSCP and compare it with chain-of-thought prompting [Wei et al., 2022b]. The results for the top 5 performing Video-LMMs are shown in Tab. 3. Chain-of-thought prompting improves over the standard prompting technique in 3 out of 5 Video-LMMs, suggesting that prompting techniques from the NLP literature can effectively guide multi-modal Video-LMMs to enhance reasoning and robustness. Next, we ablate on the first step of DSCP prompting, which uses the principled instructions of DSCP step 1 as a prefix alongside the actual user question. Using the first-step prompting technique of DSCP substantially improves model performance on all Video-LMMs, suggesting the effectiveness of the principled prompt instructions designed specifically for video models. DSCP with both steps, which integrates an additional thinking step into the prompting, further improves the results and provides the highest results on 4 out of 5 Video-LMMs. 5.4 Main Findings and Qualitative Results. Based on the results of Video-LMMs on CVRR-ES, we draw key findings and show qualitative results. These insights can serve as valuable guidance for developing the next generation of Video-LMMs, aiming to make them more robust and reliable when deployed in real-world applications. Models excelling at standard VQA benchmarks struggle on the CVRR-ES benchmark. Our analysis in Sec. 5.1 reveals that the latest open-source Video-LMMs, such as Video-LLaVA, MovieChat, and LLaMA-VID, perform less effectively on the CVRR-ES benchmark compared to Video-LMMs that were introduced earlier in the community, such as VideoChat and Video-ChatGPT. Interestingly, the same recent models demonstrate superior performance on general video comprehension benchmarks. This discrepancy suggests that current VQA benchmarks, like ActivityNet-QA [Yu et al., 2019] and MSRVTT [Xu et al., 2017], do not adequately correlate with the complex video reasoning and robustness scenarios highlighted in our benchmark. Consequently, this also indicates that most newer Video-LMMs are heavily trained to excel on general video comprehension benchmarks while reducing their generalizability, reasoning, and robustness capabilities. Over-affirmative behavior of open-source Video-LMMs. Another important observation about open-source models is their tendency to exhibit excessively positive and affirmative responses. As shown in Fig. 7, open-source Video-LMMs consistently respond with \"Yes\" even when faced with
In our CVRR-ES benchmark, opensource models are particularly vulnerable to our evaluation dimensions of \"Non-existent actions with the existent scene\" and \"Non-existent actions with the non-existent scene\" compared to closed-source models. These models lack negation and self-rectification capabilities, especially when users provide misleading or confusing questions. We conjecture that such behavior arises due to the absence of negative instruction tuning pairs during the training of Video-LMMs. Tendency towards activity completion. Most open-source Video-LMMs have shown weak performance on the evaluation dimension of partial actions in CVRR-ES, which contains videos focusing on incomplete or atomic actions. To further analyze the models\u2019 behavior, we show qualitative results on such videos in Fig. 8. It can be observed that most open-source models tend to complete actions, even when only part of the action is provided in the video. For instance, Video-LLaVA struggles to reason over the video and describes the man as kicking the soccer ball, while the action in the video stops at the point of the man placing his foot beside the ball. We observe similar behavior in other Video-LMMs. Upon examining the fine-tuning strategies [Maaz et al., 2023, Liu et al., 2023b], we find that almost all models are trained on end-to-end actions-based instruction-tuning data, causing them to generate complete action descriptions at inference. This tendency highlights the vulnerability of Video-LMMs after deployment, as real-world scenarios often involve atomic, sub-atomic, and general actions alike. To improve the performance of Video-LMMs, it is crucial to incorporate diverse action types during training, including partial and incomplete actions. Weak Generalization to extreme OOD videos. The evaluation dimension of unusual and physically anomalous activities in CVRR-ES resembles extreme out-of-distribution video examples. With the exception of GPT4V and Gemini, Video-LMMs struggle with this dimension, indicating weak generalizability towards OOD videos containing the coexistence of unusual objects and activities that are extremely rare in typical videos. For instance, Video-LLaVA in Fig. 9 describes a person falling on the street, while the video actually shows the person performing an optical illusion. To be responsibly deployed in real-world applications, where OOD actions occur more frequently, Video-LMMs need to be trained to perform more robustly on OOD samples. This may involve incorporating diverse and atypical examples in the training data to improve the model\u2019s ability to handle unusual situations. Limited understanding of temporal order in complex videos. The CVRR-ES benchmark results show that Video-LMMs perform relatively better on the fine-grained action dimension compared to the time-order understanding dimension. While these models can accurately identify fine-grained actions, they struggle with comprehending the correct temporal order of these actions within a video. This limitation can lead to misinterpretations of the underlying information depending on temporal order. We present failure cases of this dimension in Fig. 10. For building more advanced world-centric Video-LMMs, it is crucial to enhance their ability to process and interpret event sequences accurately. Video-LMMs struggles in understanding the emotional and social context. 
For more reliable interaction between Video-LMMs and humans in practical scenarios, these models should comprehend the spatio-temporal scenes with social and contextual reasoning capabilities similar to humans. The lower performance of Video-LMMs on the social and emotional contextual dimensions in CVRR-ES highlights their limitations and lack of understanding of scenes based on contextual cues. For instance, as shown in Fig. 11 (bottom row), GPT-4V struggles to comprehend a scene where a worker is attempting to prevent shoes from getting wet due to the rain by moving them under the shade. Instead, GPT-4V provides a response that contradicts the social cues present in the video. 6 Conclusion Given the expanding role of Video-LMMs in practical world-centric applications, it is vital to ensure that these models perform robustly and exhibit human-like reasoning and interaction capabilities across various complex and real-world contexts. In this work, we present the CVRR-ES benchmark for Video-LMMs, aiming to evaluate Video-LMMs on these very fronts. Through extensive evaluations, we find that Video-LMMs, especially open-source ones, exhibit limited robustness and reasoning capabilities over complex videos involving real-world contexts. Based on our analysis, we formulate a training-free prompting technique that effectively improves the performance of Video-LMMs across various evaluation dimensions of the CVRR-ES benchmark. Furthermore, we analyze and investigate the failure cases of Video-LMMs on the CVRR-ES benchmark and deduce several important findings. We hope that the CVRR-ES benchmark, accompanied by our extensive analysis, will contribute towards building the next generation of advanced world-centric video understanding models.
[Figure 7: Over-affirmative behaviour. Most open-source Video-LMMs exhibit overly affirmative behavior by consistently agreeing with user questions, even when the questions are confusing or inaccurate.]
[Figure 8: Action completion tendency. Most open-source Video-LMMs tend to generate captions corresponding to complete actions and struggle with determining incomplete or partial actions.]
[Figure 9: Weak generalization on OOD videos. Open-source Video-LMMs struggle to correctly reason over videos containing rare and unusual actions.]
Open-source Video-LMMs struggle to correctly reason over videos containing rare and unusual actions. Is the hand movement from left to right in the video? It appears that the hand movement shown is from right to left, rather than from left to right. yes. yes. Yes, the hand movement in the video is from left to right. The hand appears to be pointing at the food. GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat Yes, the hand is moving from left to right in the video.. Yes, the hand movement in the video is from left to right in the \u2026 Yes, the hand movement in the video is from left to right. The .... Is the person running towards the camera or running in the direction away from the camera? The person in the images is running backwards, away from the camera. You can tell because they are \u2026 The person is running backwards toward the camera. running towards the camera In the given video, the person is running towards the camera. The person is running towards the camera. The person is running towards the camera. The person is running towards the camera. Video-ChatGPT Figure 10: Limited temporal understanding. Most Video-LMMs struggle to accurately determine the temporal order of actions in videos. The bottom video shows a man running backward along a track. 13 Identify if the man calmly walk away after putting the ball in the hole? the man does not calmly walk away after putting the ball in the hole. The sequence shows him experiencing frustration after the putt; \u2026 No, he takes the flag out of the hole and throws it. He then walks away. yes. The man is walking calmly after putting the ball into the hole during a golf game. The scene shows \u2026 GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat Yes, the man calmly walks away after putting the ball \u2026 Yes, the man in the video calmly walks away after putting the ball in the hole. Yes, the man calmly walks away after putting the ball in the hole. What is the person in orange uniform doing and why? The person in the orange uniform appears to be a worker involved in some sort of maintenance or cleaning activity. Looking at the \u2026 The person in orange uniform is a street sweeper. She is sweeping up the sidewalk and putting the trash into a cart. The person in orange uniform is sweeping the street and removing trash. \u2026 \u2026 It is not clear what the person in the orange uniform is doing, but it appears to be standing and observing the street corner. The person in the orange uniform is standing on a street corner and appears to be talking to someone. \u2026 The person in orange uniform is cleaning the sidewalk with a small white cloth. \u2026 The person in the orange uniform is picking up trash on the side of the road. Video-ChatGPT Figure 11: Limited contextual understanding. Most Video-LMMs exhibit a weak understanding of complex videos that contain emotional (e.g., an angry player in the top video) and social cues (e.g., a person saving shoes from getting wet due to rain in the bottom video).", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2403.11085v3", + "title": "m&m's: A Benchmark to Evaluate Tool-Use for multi-step multi-modal Tasks", + "abstract": "Real-world multi-modal problems are rarely solved by a single machine\nlearning model, and often require multi-step computational plans that involve\nstitching several models. Tool-augmented LLMs hold tremendous promise for\nautomating the generation of such computational plans. 
However, the lack of\nstandardized benchmarks for evaluating LLMs as planners for multi-step\nmulti-modal tasks has prevented a systematic study of planner design decisions.\nShould LLMs generate a full plan in a single shot or step-by-step? Should they\ninvoke tools directly with Python code or through structured data formats like\nJSON? Does feedback improve planning? To answer these questions and more, we\nintroduce m&m's: a benchmark containing 4K+ multi-step multi-modal tasks\ninvolving 33 tools that include multi-modal models, (free) public APIs, and\nimage processing modules. For each of these task queries, we provide\nautomatically generated plans using this realistic toolset. We further provide\na high-quality subset of 1,565 task plans that are human-verified and correctly\nexecutable. With m&m's, we evaluate 6 popular LLMs with 2 planning strategies\n(multi-step vs. step-by-step planning), 2 plan formats (JSON vs. code), and 3\ntypes of feedback (parsing/verification/execution). Finally, we summarize\ntakeaways from our extensive experiments. Our dataset and code are available on\nHuggingFace (https://huggingface.co/datasets/zixianma/mnms) and Github\n(https://github.com/RAIVNLab/mnms).", + "authors": "Zixian Ma, Weikai Huang, Jieyu Zhang, Tanmay Gupta, Ranjay Krishna", + "published": "2024-03-17", + "updated": "2024-03-21", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "Planning agents\u2014powered by large language models (LLMs)\u2014are becoming increasingly proficient at decom- posing user-specified tasks into a series of subtasks, where each subtask is executed by invoking tools. For example, if a user wants to learn about a new piece of clothing in an image, the LLM planner can create a plan with multiple steps: first, it invokes an image classification tool to iden- tify the clothing as a \u201ckimono\u201d; once identified, it can issue a Wikipedia search query to explain the cultural relevance of the kimono (Figure 1 first row). LLM planning agents typically consist of an LLM and a set of tools to plan over. Given an LLM and toolset, the design space of planning agents is extremely rich, involving many decisions such as planning strategy (e.g. generation of the whole plan vs one step of the plan at a time), forms of feedback (e.g. no feedback or pars- ing/verification/execution feedback), and plan format (e.g. JSON strings that specify tools and arguments vs free-form Python code). Unfortunately, there is no existing planning bench- mark that supports evaluation along this combinatori- ally rich design space with a realistic set of multimodal tools. Recent concurrent benchmarks such as ToolEmu and TaskBench [21, 24] provide user queries and ground truth plans but lack support for realistic plan execution. For in- stance, TaskBench assumes that a list of tools is available for planning without providing actual implementation of the tools. TaskBench also does not instantiate the queries with actual inputs and uses placeholder input filenames like \u201cex- ample.png\u201d that do not exist. ToolEmu likewise uses LLMs to emulate tool execution instead of providing tool imple- mentations. The lack of actual implementations of tools and real execution feedback while planning makes the study of the design space elucidated above unrealistic at best, if not impossible. 
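To make the plan-format axis of this design space concrete, the sketch below expresses the same hypothetical two-step task (classify the clothing in an image, then look it up on Wikipedia) once as a structured JSON-style plan and once as free-form code. The tool names, argument names, and the "<node-0>.label" reference convention are illustrative assumptions, not the benchmark's actual schema.

```python
# Hypothetical tool stubs standing in for real models/APIs (for illustration only).
def image_classification(image: str) -> str:
    return "kimono"  # placeholder prediction

def wikipedia_simple_search(text: str) -> str:
    return f"A {text} is a traditional Japanese garment."  # placeholder summary

# (1) Structured (JSON-style) plan: each step names a tool and its arguments;
#     "<node-0>.label" refers to the output of the previous step.
json_style_plan = [
    {"id": 0, "name": "image_classification", "args": {"image": "photo.jpg"}},
    {"id": 1, "name": "wikipedia_simple_search", "args": {"text": "<node-0>.label"}},
]

# (2) Code-style plan: the same two steps written as free-form Python.
def code_style_plan(image_path: str) -> str:
    label = image_classification(image=image_path)
    return wikipedia_simple_search(text=label)

print(code_style_plan("photo.jpg"))
```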
Motivated by this dire need for a standardized bench- mark for studying the design space of multi-step multi- modal planning agents, we first propose the m&m\u2019s bench- mark. m&m\u2019s contains 4K+ realistic user tasks and auto- matically generated task plans. 1565 of these task plans are human-verified and executable with 33 curated tools con- sisting of multi-modal models, (free) public APIs, and im- age processing modules. Next, we use m&m\u2019s to systematically study the im- pact of 2 planning strategies (step-by-step and multi-step), 2 kinds of feedback (verification and execution), and 2 plan formats (JSON and code). Through extensive experimen- tation with 6 popular open-source (LLaMA-2-7B, LLaMA- 2-13B, Mixtral-8x7B) and proprietary LLMs (Gemini-pro, GPT-3.5, GPT-4) of varying sizes, we provide a series of findings: First, existing LLMs instructed to perform multi- step planning consistently outperform step-by-step plan- ning, regardless of the model size. This is in contrast to the popularity of ReAct-style [40] planning that is predom- inant today. Second, feedback improves LLM\u2019s ability to predict the correct argument name for each tool and gener- ate overall executable tool plans but doesn\u2019t necessarily im- prove the ability to choose the right tools. Third, most mod- els (except for Llama-2-7b) perform comparably on tool prediction with JSON-format generation and Python code generation, but they all produce more executable plans with JSON-format generation than with code generation. Taken together, our experiments suggest that for m&m\u2019s tasks, multi-step planning in JSON with feedback can result in the best overall tool-use performance compared to step-by-step planning, code generation, or the same setup without feed- back.", + "main_content": "We situate our work amongst the ever-growing number of tool-use research. Planning evaluations. Although many tool-use variants have been proposed, evaluating LLMs on tool-use still lacks a standardized protocol. For instance, VisProg and ViperGPT evaluate their plan\u2019s executions on vision tasks using a Python-like code format [7, 29]. HuggingGPT evaluates only the plan accuracy (did the agent choose the right tools) without executing the proposed plans [23]. ToolFormer [22] and ToolLLaMA [18] both use natural language instead of code to interface with tools; while ToolFormer generates a multi-step plan all at once and evaluates the program\u2019s execution, ToolLLaMA generates the plan step-by-step, with self-feedback to correct mistakes. ToolLLaMA evaluates only the plans while ToolFormer evaluates both plans and executions. Unfortunately, no single benchmark evaluates planning agents along this combinatorial design space, which is what we contribute. Tool-use benchmarks. Today, tool-use evaluation is spread out across a number of diverse benchmarks, including HotpotQA, WebShop, GQA, RefCOCO, and NLVR [10, 11, 27, 37, 38]. None of these contains ground truth plans, conflating planning errors with execution error. In other words, it is hard to separate whether an LLM failed to propose the correct plan or whether one of the tools used in the plan failed. In response, recent concurrent efforts have proposed new benchmarks, such as ToolEmu, TaskBench, and GAIA [15, 21, 24]. They do contain ground truth plans but fail to support evaluating plans\u2019 execution results. Planning strategies. There are multiple strategies for planning. 
For instance, psychology literature reveals that people rarely plan tasks in their entirety due to the cognitive cost of planning long-range tasks [3]. Instead, they plan the first couple of subtasks and execute them before planning the rest [1, 3]. In the tool-use literature, we identify two primary forms of planning strategies: step-by-step planning [5, 18, 40] and multi-step planning [7, 23, 29]. Similar to people, step-by-step planning generates plans sequentially with one subtask at a time. By contrast, multi-step planning creates the entire plan before executing any subtask. Unfortunately, these two strategies have not been systematically compared; we systematically compare both across multiple open-source and closed-source LLMs.

Table 1. Compared to previous tool planning benchmarks, m&m's contains multimodal queries that are more realistic and executable. *: MetaTool only considers OpenAI plugins as tools. #: The queries of TaskBench contain textual placeholders for other modality data such as images, while queries of m&m's come with real images.
| | ToolBench [18] | ToolEmu [21] | TaskBench [24] | MetaTool [9] | m&m's (ours) |
|---|---|---|---|---|---|
| Query: Real multi-modal inputs? | ✗ | ✗ | ✗# | ✗ | ✓ |
| Query: Verified by human? | ✗ | ✓ | ✓ | ✓ | ✓ |
| Tools: Are all tools executable? | ✓ | ✗ | ✗ | ✓ | ✓ |
| Tools: Multi-modal models | ✗ | ✗ | ✓ | * | ✓ |
| Plan format | JSON | JSON | JSON | JSON | JSON/Code |
| Scale: Number of unique tools | 3,451 | 36 | 103 | 390 | 33 |
| Scale: Number of queries | 126k | 144 | 17K | 20k | 1.5k |

Feedback mechanisms. LLM planners make mistakes, stitching together tools that fail to execute or, worse, fail to compile. Although human feedback is one mechanism to align plans with human expectations and preferences [2, 33], it requires real users, making evaluation stochastic. However, there are several automatic mechanisms that can improve plans [32, 41]. For instance, syntactic mistakes can easily be detected using external verifiers and can guide planners to iterate on their plans [8, 14, 16, 25]. Others require examining the output of individual subtask executions [20, 28, 31, 40, 43]. In this work, we compare plan parsing/verification feedback as well as tool execution feedback.

3. m&m's: the benchmark
To facilitate the study of LLM planners for multi-step multimodal tasks, we curate the m&m's benchmark. Before describing the dataset generation process, we first formalize the tool-planning problem in Sec 3.1. We then describe our benchmark creation process in Sec. 3.2 and present dataset statistics in Sec. 3.3.

3.1. Formalizing multi-step multi-modal tool-use
Given a tool set $T$ and a user query $Q$, a planner is required to produce a plan $P$ that consists of a sequence of tool invocations
$P = [\, t_1(\{a_1^k = v_1^k\}_k),\; t_2(\{a_2^k = v_2^k\}_k),\; \cdots,\; t_m(\{a_m^k = v_m^k\}_k)\,],$
where $t_j$ represents the $j$-th tool in the plan, and $a_j^k$ and $v_j^k$ represent tool $t_j$'s $k$-th argument name and value, respectively. Note that the output of $t_j$ may be used as argument values for subsequent tools $t_{j+1:m}$. m&m's contains a set of $N$ query-plan pairs, i.e., $\{(Q_i, P_i)\}_{i \in [N]}$, with each plan composed of executable tools chosen from a curated set of API-calling functions, multi-modal models, and image processing modules. 3.2.
Dataset generation To create such a dataset, our data generation process consists of five major steps: 1 We construct a valid tool graph with all the tools and sample a subgraph from it as a sequence of tools in the target plan; 2 To instantiate the tool plan with inputs, we pair plans with real inputs (e.g., images) from existing datasets based on the first tool in the plan; 3 To generate the user query, we prompt GPT-4 with the tool graph and input pair to generate realistic user queries with few-shot demonstrations; 4 We transform the tool graph and input pair to a fully specified JSON plan (tool names with corresponding argument names and values) with a rule-based program. Additionally, we map the JSON-format plans to Python code to support code generation evaluation; 5 Finally, three human annotators verify whether each plan can solve the corresponding user query. To obtain the final tool-balanced human-verified subset, we discard some plans from the initial human-verified set to avoid an overwhelming representation of any tool. 1 Tool graph sampling. We first create a directed graph with all 33 tools as the nodes and edges denoting valid connections between nodes. A connection is valid only if the output of the source tool matches the expected input type of the target tool. For example, there is an edge between image classification and wikipedia simple search, because the output of image classification a text label is a valid input type for wikipedia simple search. We then sample subgraphs from the full tool graph to obtain tool sequences with valid tool dependencies. 2 Input example sampling. We now need to instantiate Figure 2. Data generation pipeline. As shown in panel A, our generation process consists of five main steps: input example sampling, tool graph sampling, user query generation with GPT-4, rule-based plan generation, and human verification. Panel B showcases an instantiation of this process with a real example. queries with real input examples. To do so, we first collect real-world examples from the validation sets of 11 existing datasets, including ImageNet [4], SQUAD [19], Visual Genome [12], MagicBrush [42], librispeech [17]. Then, to pair a tool graph sampled in the previous step with an input, we randomly sample an input based on the input type needed for the first tool in the graph. For example, if the first tool in a tool sequence is image classification, we randomly sample an image (e.g. \u201c16611.jpg\u201d) from ImageNet as the input. 3 Query generation. With a set of tool sequences and input examples to the first tools, we prompt GPT-4 to generate realistic user queries. Concretely, we randomly sample five different input examples for each tool sequence and ask GPT-4 to generate two queries for each tool sequence with the same input (See Appendix for the full prompt). 4 Plan generation. For plan generation, we write a rulebased program to generate a plan (i.e. an ordered list of tool names with corresponding argument names and values fully specified) for each query. Each step in the plan contains an id, tool name, and an argument dictionary with this tool\u2019s argument names as the keys and argument values as values. We populate each node\u2019s ID and name based on the sampled tool sequence and fill in the argument names for each tool using a pre-defined metadata document. 
We also fill in the argument values of the first tool using the input examples and those of subsequent tools using a special notation ⟨node-id⟩.key, where id refers to the id of the previous node and key refers to the output key. To further refine the plans to be even more faithful to the queries, we rewrite the argument values of text generation and image generation (e.g., from "a shark" to "a child-friendly book cover image of a shark") by prompting GPT-4 with the queries and original plans.
5 Human verification. Finally, we perform extensive human verification on all 4427 generated query-plan pairs. We ask three expert annotators (undergraduate and Ph.D. students in CS) to rate each query-plan pair with 0 or 1, where 1 indicates that the plan can resolve the query perfectly. We obtain a subset of 1500+ examples on which all three annotators rate 1, and perform further filtering of examples whose plans contain much more frequent tools (e.g., image generation and text generation) to balance the overall distribution of tools (see Appendix for more details on filtering and the tool distribution).

Figure 3. Types of feedback. We present examples of parsing, verification, and execution feedback in both success and failure cases.

It is worth noting that two of the steps in our dataset generation pipeline draw similarities with the recently released concurrent TaskBench [24]. Similar to TaskBench, we also sample a subgraph of tools and include a query generation step. However, we want to highlight two major differences: first, we leverage real-world examples as inputs to the tool sequences (in contrast to TaskBench's "example.jpg", "example.wav", etc.), which not only leads to a more realistic instantiation of queries but also enables plan execution on actual inputs, which is crucial for studying the role of feedback in planning agents. Second, we use a rule-based program instead of GPT-4 to obtain the ground truth plans based on the sampled tool sequences, which eliminates the possibility of hallucinated and incorrect plans.

3.3. Dataset quantity and quality
Overall, m&m's contains a large quantity of diverse, ecologically-valid task queries (see Figure 1). Each task is associated with human-verified and executable plans (Table 2). Concretely, there are a total of 4427 raw examples in m&m's, of which 1565 have been verified to be correct by three human annotators. After additional filtering for a balanced tool distribution (see Appendix for more details), we select a subset of 882 examples for our evaluation. Tasks are granular in difficulty: 70 queries require a single tool, 159 need two tools, and 653 need three tools. In terms of tools, there are 33 unique tools in total across three different categories, of which 13 are multi-modal machine learning models on HuggingFace, 11 are image processing modules from VisProg [7], and 9 are free public APIs from RapidAPI (https://rapidapi.com/hub). Our final dataset includes 317 representative tool graphs, where each graph has multiple queries. See more examples in the Appendix.

Table 2. The statistics of the m&m's dataset.
| Item | Number |
|---|---|
| Raw examples | 4427 |
| Human verified examples | 1565 |
| Human verified & balanced examples | 882 |
| 1 / 2 / 3-tool examples | 70 / 159 / 653 |
| Tools | 33 |
| ML model / image processing / API | 13 / 11 / 9 |
| Tool graphs | 317 |
| Avg. # of unique queries per tool graph | 2.78 |
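As a rough illustration of steps 1 and 4 of the generation pipeline above, the sketch below builds a tool graph whose edges connect tools with compatible output/input types, samples a linear subgraph by random walk, and wires consecutive steps together with the ⟨node-id⟩.key notation. The four-tool inventory and its type annotations are invented for illustration and are far smaller than the actual 33-tool set.

```python
import random

# Toy tool metadata: each tool's input argument name/type and output key/type.
# Invented for illustration; the real benchmark curates 33 executable tools.
TOOLS = {
    "image_classification":    {"in": "image", "out_key": "label", "out": "text"},
    "wikipedia_simple_search": {"in": "text",  "out_key": "text",  "out": "text"},
    "text_generation":         {"in": "text",  "out_key": "text",  "out": "text"},
    "image_generation":        {"in": "text",  "out_key": "image", "out": "image"},
}

# Step 1: an edge a -> b is valid only if a's output type matches b's input type.
EDGES = {a: [b for b in TOOLS if b != a and TOOLS[a]["out"] == TOOLS[b]["in"]]
         for a in TOOLS}

def sample_tool_sequence(length: int) -> list:
    """Random walk over valid edges to obtain a linear subgraph of tools."""
    seq = [random.choice(list(TOOLS))]
    while len(seq) < length and EDGES[seq[-1]]:
        seq.append(random.choice(EDGES[seq[-1]]))
    return seq

def to_plan(seq: list, first_input: str) -> list:
    """Step 4 (simplified): fill in argument values, wiring later steps to
    earlier outputs with the <node-id>.key notation."""
    plan = []
    for i, tool in enumerate(seq):
        if i == 0:
            value = first_input
        else:
            value = f"<node-{i - 1}>.{TOOLS[seq[i - 1]]['out_key']}"
        plan.append({"id": i, "name": tool, "args": {TOOLS[tool]["in"]: value}})
    return plan

print(to_plan(sample_tool_sequence(3), "16611.jpg"))
```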
4. Planning agent
To systematically evaluate the design space of planning agents, we design a modular planning system with these components: planning LLM, parser, verifier, and executor. We implement this system with AutoGen's framework [35]. Given the user query, the LLM must iteratively generate and refine the plan. Each iteration involves generating the whole plan or a part of it and receiving feedback on the generation. Given the raw text output from the LLM at the current iteration, m&m's supports the following 3 kinds of feedback (Figure 3):
Parsing feedback. The parser attempts to parse the LLM text output into either JSON or code format and returns an error message in case of parsing failures.
Plan verification feedback. The verifier checks the parsed output according to pre-defined rules and returns an error message in case of rule violations. Specifically, the verifier checks whether the predicted tool exists in our provided tool list, whether it forms a valid connection with the previous tool, and whether the predicted argument names match the ones specified in the metadata document.
Plan execution feedback. In the case of JSON output, the executor calls the functions with the specified arguments in a Python environment and returns the output or execution errors. In the case of code output, the code is directly executed, with outputs or errors returned as feedback.

5. Experiment
Using our benchmark with a flexible agent design, we experiment with 6 instruction-tuned large language models of varying sizes (2 open-source and 4 proprietary) across different planning setups. We describe these evaluation setups in Sec. 5.1, metrics in Sec. 5.2, and our experimental findings in Sec. 5.3.

5.1. Setup
We establish a unified framework to categorize LLMs' task planning setups along the three axes below.
Planning strategy: Prior works formulate task planning as either step-by-step or multi-step planning [18, 23, 40]. Step-by-step planning refers to the setup where a language model is instructed to predict only one action at a time (Figure 4 (1b)). On the other hand, in the setting of multi-step planning, a model can predict multiple actions at once (Figure 4 (1a)).
Plan format: Additionally, existing works have also adopted different plan formats for tool use: often as code, pseudo-code, or predefined structured representations such as JSON [7, 23, 29]. In this work, we primarily focus on two of the common plan formats: JSON and code.
Feedback: We experiment with three kinds of feedback: feedback from parsers, rule-based verifiers, and execution modules (Figure 3). Nevertheless, our benchmark can be used to study other types of feedback, such as self-feedback [13], which we leave to future work.

5.2. Evaluation metrics
To holistically evaluate planning agents' performance on our benchmark, we adopt three main metrics: tool-F1, argname-F1, and pass rate. Tool-F1 is defined as the F1 score of tool name prediction, where we treat each predicted tool name as one example and compare the set of predicted tool names to the ground-truth set of tools in each plan. Similarly, argname-F1 is defined as the F1 score of argument name prediction for each tool, where we consider each (tool name, argument names) tuple as one example. Our implementation turns each tuple into a string and compares the set of predicted "tool name-argument names" strings to the labels. Pass rate is the percentage of predictions that execute successfully without any execution errors.
It measures the executability but not the correctness of the predicted plans. We choose these three metrics because they assess two important aspects of planning and tool use: tool selection and tool invocation. A higher tool-F1 indicates better tool selection, whereas higher argname-F1 and pass rate imply improved tool invocation. To evaluate models with the same metrics in the code generation setup, we parse the generated code into an Abstract Syntax Tree (AST) with Python\u2019s AST module and extract the function names and argument names for calculating tool-F1 and argname-F1. We also provide argvalue-F1 in the supplementary for completeness but caution the reader about the challenges of evaluating argument values due to surface-form or syntactic differences in the values, particularly for free-form text arguments (e.g. the prompts in image generation and text generation). We report additional metrics, including overall plan accuracy, normalized edit distance, and edge-F1 as well as code-specific metrics such as AST accuracy and CodeBLEU in the supplementary material. 5.3. Results We first highlight the key findings from our empirical analysis and then describe each finding in more detail: 1. All planning agents perform better on tool selection with Figure 4. Illustrating the three main planning setups in our evaluation: (1a) multi-step and (1b) step-by-step JSON-format language generation [40], and (2) code generation. (Note that the prompts have been simplified for illustration. Please see the Appendix for the full prompts). multi-step planning than with step-by-step planning regardless of the underlying LLMs\u2019 sizes (Fig. 5) 2. Verification and execution feedback can help models improve tool invocation by predicting correct argument names and generating executable plans but can lead to worse tool selection due to wrong fixes (Tab. 3 and Fig. 6) 3. While models perform comparably on tool selection with JSON versus code generation, they produce more overall executable plans with JSON-format generation (Fig. 7) Models consistently perform better on tool-F1 and pass rate when instructed to perform multi-step planning instead of step-by-step planning regardless of their sizes. We find that all large language models achieve higher tool-F1 when they are instructed to perform multi-step planning compared to when they perform step-by-step prediction (Figure 5), and all models except for Gemini-pro achieve a higher pass rate with multi-step planning. Among the 6 models we evaluated, Llama-2-7b, Llama-2-13b, and GPT-3.5 all showcase a large increase (\u00bf10%) in performance with multi-step planning compared to step-by-step prediction, with the greatest increase of 21.8% for GPT-3.5. Through qualitative analysis, we learn that when models are Llama-2-7b Llama-2-13b Mixtral-8x7B Gemini-pro GPT-3.5 GPT-4 model 0 20 40 60 80 tool-F1 (a) tool-F1 planning strategy step-by-step multi-step Llama-2-7b Llama-2-13b Mixtral-8x7B Gemini-pro GPT-3.5 GPT-4 model 0 20 40 60 80 100 pass rate (b) pass rate planning strategy step-by-step multi-step Figure 5. Comparing planning strategies. We find that models consistently perform better on tool-F1 under multi-step prediction compared to under step-by-step prediction regardless of their sizes. Similarly, all models except for Gemini-pro achieve a higher pass rate with multi-step prediction. Table 3. We present the tool-F1 and argname-F1 of models with various feedback, where P, V, and E represent parsing, verification, and execution feedback respectively. 
We use parsing feedback only (P) under multi-step planning and JSON-format language generation as the basis, while showing the \u2206of those with other feedback combinations compared to parsing feedback only. We find that verification and execution feedback can improve models\u2019 performance on argument name prediction and pass rate, but can hurt tool selection. tool-F1 argname-F1 pass rate model P PV PE PVE P PV PE PVE P PV PE PVE Llama-2-7b 29.78 -2.94 -2.59 -2.58 34.03 2.03 1.24 1.15 28.23 18.14 10.32 13.72 Llama-2-13b 42.27 -3.45 -2.78 -4.57 45.07 3.94 3.08 3.29 38.10 29.93 32.99 23.92 Mixtral-8x7B 66.79 1.18 -0.11 -0.04 72.52 2.00 1.89 2.72 75.74 10.32 8.96 10.77 Gemini-pro 69.38 1.18 -0.11 -0.04 73.37 2.00 1.89 2.72 77.32 13.27 14.06 16.67 GPT-3.5-turbo-0125 80.52 -0.65 -2.80 -2.56 84.86 0.65 -0.92 -0.86 89.46 6.69 7.26 6.92 GPT-4-0125-preview 88.46 -0.60 0.25 -0.91 89.81 -0.18 0.48 0.32 97.73 1.13 -1.25 2.15 Note: we use the experiments with parsing feedback instead of no feedback at all as the baseline to highlight external feedback\u2019s effects on tool selection and invocation instead of parsing. We include the results of experiments with no feedback in the Appendix. 30 40 50 60 70 80 90 100 pass rate 30 40 50 60 70 80 90 tool-F1 model Llama-2-7b Llama-2-13b Mixtral-8x7B Gemini-pro GPT-3.5 GPT-4 feedback P PVE Figure 6. Comparing without vs. with feedback. P = parsing feedback, PVE = parsing, verification, and execution feedback. We find that feedback greatly improves planning agents\u2019 pass rates across different model sizes, especially for Llama-7b, Llama-13b, and Gemini-pro. However, feedback can also harm models\u2019 tool prediction performance and decrease their tool-F1 by up to 5%. instructed to perform step-by-step prediction, they tend to output \u201cTERMINATE\u201d after they receive positive feedback (e.g. \u201cParsing/verification/execution succeeded\u201d) from the environment, disregarding whether the user request has been fulfilled. This means that they often predict fewer steps than required and miss necessary tools to resolve the requests (See Figure 8 A). External feedback can improve planning agents\u2019 performance on argument name prediction and pass rate. On the effects of external feedback, we find that both verification and execution feedback can lead to slightly better argname-F1 and much higher pass rates (Table 3), indicating that feedback can help models predict correct argument names and generate more executable plans. With feedback, most models can increase argname-F1 by around 1-4% and pass rate by up to 20-30% (Table 3). There are only a few exceptions on GPT-3.5 and GPT-4, which already obtain relatively high performance without feedback and experience around 1% drop in argname-F1 and/or pass rate with feedback (Table 3). Also, verification feedback can be more helpful than execution feedback on argument name prediction. In qualitative analysis, we find that this is because our verifier pinpoints where the error occurs and outputs a targeted and thus more helpful feedback message. On the other hand, the execution module returns the error message Llama-2-7b Llama-2-13b Mixtral-8x7B Gemini-pro GPT-3.5 GPT-4 model 0 20 40 60 80 tool-F1 (a) tool-F1 format JSON code Llama-2-7b Llama-2-13b Mixtral-8x7B Gemini-pro GPT-3.5 GPT-4 model 0 20 40 60 80 100 pass rate (b) pass rate format JSON code Figure 7. Comparing plan formats. We find that all models except for Llama-7-b perform comparably on tool-F1 with JSON-format and code generation. 
However, JSON-format generation leads to a much higher pass rate across all models. Figure 8. We present examples of three common errors (A-C) in step-by-step planning and multi-step planning with JSON-format generation as well as in code generation with various feedback types. as it is, which can be vague and obscure, thus confusing the model and even resulting in wrong fixes (Figure 8 B). While we see generally positive effects of feedback on argname-F1 and pass rate, we also observe that feedback can lead to a small decrease (up to 4.5%) in models\u2019 tool-F1. We observe that this is mainly because models can change some correct tools to the wrong ones or remove them even though the feedback instructs them to only fix the erroneous parts in the plan (Figure 8 B). One way to mitigate this error can be using more fine-grained and localized feedback [36]. Additionally, neither verification feedback nor execution feedback provides useful information on the correctness of the tool selection and increases their performance on tool-F1. Nevertheless, we also note that the decrease in tool-F1 with feedback is a lot smaller compared to the gains in pass rate (Figure 6), which suggests feedback can greatly improve tool invocation at a small cost to tool selection. Models perform comparably on tool-F1 with JSONformat and code generation but much worse on pass rate with code generation. In addition to planning strategy and feedback, we learn that plan formats can also influence models\u2019 tool use performance (Figure 7), especially on the executability of the generated plans. Concretely, our experiments show that while all models except for Llama2-7b achieve similar tool-F1s (<3% difference) with JSONformat generation and code generation, they all suffer from a large drop in pass rate with code generation. Upon qualitative analysis, we find that one common execution error in code generation is failing to access the output from a tool (See Figure 8 C), which can be due to missing the output or accessing the output differently from what the instruction specifies and the tool implementation expects. While the same error also happens to JSON-format generation, it occurs less frequently due to the more rigid structure of JSON. These results suggest that JSON-format generation is preferable to code generation when the executability of generated plans matters. 6. Discussion 6.1. Limitations There are a few limitations to our benchmark and evaluation. First, m&m\u2019s only considers sequential task plans, which represent a majority of real-world user requests. However, some tasks might require dynamic task plans depending on the output for one subtask [6]. Dynamic plans require a more complex tool graph sampling procedure. Second, as our main goal is to study the effects of different planning formulations and types of feedback, we do not investigate another dimension of planning design: prompt style. We use direct and ReACT-style [40] prompting and exclude more sophisticated prompting strategies such as tree-of-thoughts prompting [34, 39]. Third, a few tools in our benchmark are generative, which makes the evaluation of the actual execution results subjective (See Appendix) [26, 30]. 6.2. Conclusion In conclusion, we highlight three major contributions of our work: first, we introduce a new benchmark m&m\u2019s to support comprehensive and rigorous evaluation of tooluse abilities of planning agents for multi-step multi-modal tasks. 
m&m\u2019s contains a large and diverse set of queries and human-verified and executable plans; second, we characterize the design space of existing tool-use methods and conducted a systematic study of 6 LLMs with different design choices, including planning formulations, plan formats and various types of feedback; finally, our experiments reveal three takeaways, suggesting that current generation of LLMs demonstrate gains in tool-planning performance on m&m\u2019s when using multi-step planning, outputting plans in JSON format, and using parsing, verification, and execution feedback. We hope m&m\u2019s enables further investigation into better planning formulations that incorporate richer and more diverse kinds of feedback for solving multistep, multi-modal tasks. 7. Acknowledgement This work was partially funded by a Sony grant. It was also made possible because of OpenAI\u2019s credit grant." + }, + { + "url": "http://arxiv.org/abs/2305.16103v1", + "title": "ChatBridge: Bridging Modalities with Large Language Model as a Language Catalyst", + "abstract": "Building general-purpose models that can perceive diverse real-world\nmodalities and solve various tasks is an appealing target in artificial\nintelligence. In this paper, we present ChatBridge, a novel multimodal language\nmodel that leverages the expressive capabilities of language as the catalyst to\nbridge the gap between various modalities. We show that only language-paired\ntwo-modality data is sufficient to connect all modalities. ChatBridge leverages\nrecent large language models (LLM) and extends their zero-shot capabilities to\nincorporate diverse multimodal inputs. ChatBridge undergoes a two-stage\ntraining. The first stage aligns each modality with language, which brings\nemergent multimodal correlation and collaboration abilities. The second stage\ninstruction-finetunes ChatBridge to align it with user intent with our newly\nproposed multimodal instruction tuning dataset, named MULTIS, which covers a\nwide range of 16 multimodal tasks of text, image, video, and audio modalities.\nWe show strong quantitative and qualitative results on zero-shot multimodal\ntasks covering text, image, video, and audio modalities. All codes, data, and\nmodels of ChatBridge will be open-sourced.", + "authors": "Zijia Zhao, Longteng Guo, Tongtian Yue, Sihan Chen, Shuai Shao, Xinxin Zhu, Zehuan Yuan, Jing Liu", + "published": "2023-05-25", + "updated": "2023-05-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.MM" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "Humans interact with the world through multiple modalities \u2014 we see objects, hear sounds, feel textures, smell odors, speak words, and so on. By leveraging complementary information from each modality, we obtain a comprehensive understanding of our surroundings. In order for Artificial Intelligence to complete various real-world tasks in the wild, it needs to be able to interpret, relate, and reason about information from multiple modalities. Significant processes have been made in multimodal learning applications, including vision and language learning [13, 33, 4, 32], video understanding [12, 39], audio-visual speech recognition [36], autonomous driving etc. 
However, current paradigms in multimodal learning often still require acquiring all types and combinations of paired data, and their capabilities are often limited to solving specific tasks with model tuning, e.g., visual question answering, sentiment analysis etc. In this paper, we present ChatBridge, a unified multimodal model that harnesses the power of advanced large language model (LLM) as a language catalyst to interpret, correlate, and reason about various modalities, and can perform zero-shot tasks of human instructions through multi-round dialogues. Large language models, such as ChatGPT [40], GPT-4 [41], and LLAMA [54], have demonstrated exceptional proficiency in understanding and generating human-like text. They show that language can act as a universal interface for a general-purpose assistant, where various tasks can be explicitly represented and responded to in language. By extending LLMs\u2019 capabilities to incorporate diverse multimodal inputs, we devise a multimodal language model that can perceive Preprint. Work in progress. arXiv:2305.16103v1 [cs.CV] 25 May 2023 real-world modalities, as well as follow instructions, think, and interact with humans in natural language. Our method doesn\u2019t require datasets where all modalities co-occur with each other. Instead, we leverage language as the catalyst to bridge modalities \u2013 we only require easy-acquired, language- paired two-modality data (e.g., image-text pairs, video-text pairs, audio-text pairs, etc.). Such strategy leads to an emergent multimodal correlation and collaboration across all of the modalities, enabling zero-shot perception capabilities on multimodal inputs without explicitly paired training data (e.g. the rare video-audio-text triples data where the text description describes both the video and audio contents). Specifically, ChatBridge integrates multiple modality-specific encoders and an LLM, Vicuna [14], which is built upon LLaMA [54], with learnable perceiver modules in between to project embeddings from different modalities into the semantic space of LLM. ChatBridge undergoes a two-stage training on large-scale language-paired two-modality data and self-built multimodal instruction-following data. In the first stage, we pretrain ChatBridge to align each modality with language, which brings emergent multimodal correlation and collaboration abilities with LLM as a language catalyst. In the second stage, we instruction-finetune ChatBridge to align the model with user intent on our newly collected MULTimodal InStruction tuning dataset (MULTIS), enabling more effective zero-shot generalization on multimodal tasks. MULTIS covers a wide range of 16 multimodal task categories and 15 source datasets involving image, video, and audio content. It consists of both standardized task-specific data and open-ended multimodal chat data. Language Image Video Audio ChatBridge Figure 1: ChatBridge overview. We har- ness the power of advanced LLM as the catalyst to bridge modalities with easy- acquired, language-paired two-modality data (e.g., image-text, video-text, and audio-text), resulting in a multimodal LLM with emergent multimodal correla- tion and collaboration abilities across all of the modalities. Our paper makes the following contributions: \u2022 We propose ChatBridge, an approach to learning a unified multimodal model to interpret, corre- late, and reason about various modalities without relying on all combinations of paired data. 
\u2022 We collect MULTIS, a multimodal instruction tuning dataset that consists of 16 diverse mul- timodal tasks covering text, image, video, and audio modalities. \u2022 We quantitatively and qualitatively evaluate Chat- Bridge on a wide range of zero-shot multimodal tasks covering text, image, video, and audio modalities, and demonstrate that ChatBridge achieves strong zero-shot performance on these tasks. \u2022 We will open-source the codebase, the MULTIS data, the model checkpoint, and a multimodal chat demo to facilitate future research toward building a general-purpose multimodal assistant.", + "main_content": "2.1 Multimodal Learning Multimodal learning aims to acquire knowledge from diverse forms of data and apply it to various tasks involving multiple modalities. Some approaches typically employ joint learning strategies and generate a merged embedding as the complementary multimodal representation across two modalities, including vision-language [33, 13, 32, 4], audio-visual [64, 44, 30], RGB-D [61, 68, 9], and speech-text [71], among others. In contrast, some other methods focus on learning separate representations for each modality and then aligning them in a shared latent space with metric learning. The seminal work CLIP [46] initially maps visual and textual embeddings into the same space, achieving remarkable performance in retrieval and classification tasks. Subsequent methods [23, 39] have extended this paradigm to incorporate additional modalities. More recently, ImageBind [21] has proposed a multimodal method for aligning different modalities with images, following a similar paradigm. 2 ChatBridge projects embeddings from different modalities into the semantic space of LLM and facilitates modality collaboration within LLM. This integration harnesses the strengths of both multimodal learning methods, resulting in a more comprehensive and effective approach. 2.2 LLM and Multimodal LLM In recent years, there has been remarkable progress in the development of LLMs, particularly in the context of GPT-3[6]. As a result, numerous LLMs have been introduced, such as OPT[67], BLOOM[47], PaLM[15], GLM[66], and LLaMA[54]. The success of the general purpose assistant ChatGPT[40] has inspired researchers to explore methods for emulating its proficiency by employing instruction tuning techniques on language models [43, 58, 53, 14, 16] and vision-language models [35, 62, 73]. Despite the successful applications of LLMs in processing language, their ability to comprehend modalities beyond text, such as vision and audio, remains a challenge. Recently, researchers have made efforts to extend language models to incorporate visual inputs, employing two distinct paradigms: systematic collaboration and end-to-end trained approaches. Systematic collaboration approaches, exemplified by Visual ChatGPT [59], MM-REACT [63], HuggingGPT [50] and ChatVideo [56], leverage various vision experts or tools to express visual information through textual descriptions. In these methods, LLMs, such as ChatGPT [40], act as agents and are prompted to select the appropriate experts and tools for visual comprehension. On the other hand, end-to-end trained approaches utilize LLMs to construct unified image-based multimodal models. Flamingo [3] freezes the pretrained vision encoder and LLM, integrating visual and language modalities using gated cross-attention, which exhibits impressive few-shot capabilities. BLIP-2 [31] employs Q-Former to align visual features from the frozen visual encoder and LLMs. 
Additionally, PaLM-E [18] directly incorporates features from sensor modalities into PaLM [15]. There are also several methods built on the open-source LLM LLaMA [54] and its variations Alpaca [53] and Vicuna [14], including LLaVA [35], MiniGPT4 [73], and mPLUG-Owl [65].

3 Methods
ChatBridge is a multimodal language model capable of perceiving real-world multimodal information, as well as following instructions, thinking, and interacting with humans in natural language. In this paper, we consider the modalities of image, video, and audio, while deferring the integration of additional modalities such as sketch and point cloud to future work.

3.1 Architecture Overview
As illustrated in Figure 2, ChatBridge consists of multiple modal-specific encoders and perceiver modules, and a transformer-decoder-based LLM. Inspired by Flamingo [3] and BLIP-2 [31], we introduce perceiver modules to bridge the encoders and the LLM. The perceiver summarizes the variable-length embeddings from each encoder's outputs within a given number of learnable query tokens. It thereby produces outputs of the same shape for all modalities. Also, as the number of query tokens is much smaller than the size of the encoder features, it significantly reduces the computation cost in the LLM. We instantiate the perceiver as a transformer decoder with learnable query tokens and the encoder embeddings as the input. The information $X_i$ from the $i$-th modality is first fed into the encoder $h_i$ to extract its features. Then each perceiver $\rho_i$ with learnable queries $H_i$ transforms them into a shared latent space:

$Z_i = \rho_i\left(Q = H_i,\; K, V = h_i(X_i)\right) \quad (1)$

Given all the multimodal inputs and the human instruction $X_{\mathrm{instruction}}$ as inputs, the LLM $f$ generates the final response text sequence $Y$ by:

$Y = f\left(Z_1, Z_2, \ldots, Z_n, X_{\mathrm{instruction}}\right) \quad (2)$

Specifically, we choose the open-source Vicuna-13B [14] as the LLM, which is built upon LLaMA and is reported to achieve 90% of ChatGPT's quality as per GPT-4's evaluation. As for the modal-specific encoders, we choose ViT-G [51] as the vision encoder to encode images and videos, and BEAT [11] as the audio encoder to encode audio.

Figure 2: Model architecture of ChatBridge. It consists of multiple modal-specific encoders and perceiver modules and an LLM.

We sample 4 frames from each video and concatenate
3.2 Two-stage Training Motivated by ChatGPT [40] that is built upon the pretrained GPT-3.5, ChatBridge also undergoes a two-stage training on large-scale language-paired two-modality data, and self-built multimodal instruction-following data. Stage1: Multimodal Alignment In the first stage, we pretrain ChatBridge to align each modality with language, which brings emergent multimodal correlation and collaboration abilities with LLM as a language catalyst. We leverage large-scale language-paired two-modality data for multimodal alignment training, including image-text, video-text, and audio-text pairs. Specifically, the training data consists of publicly available datasets of image-text pairs (including MS-COCO [34], SBU Captions [42], Conceptual Captions [49, 7], LAION-115M [48]), video-text pairs of Webvid10M [5], and audio-text pairs of WavCaps [38]. The raw unimodal data (i.e. images, videos, audios) is sequentially fed into the modality-specific encoder and perceiver to get unimodal embeddings. The input format of LLM in this training stage is: \u201c\" where is the sequence of unimodal embeddings from the perceiver, which can be regarded as soft prompts. The LLM directly tasks as input and is trained to output the corresponding text in the training samples. We train for 150k steps in this training stage with a batch size of 256 on 8 A100 GPUs. Stage2: Multimodal Instruction Tuning After aligning unimodal data with LLM, our model already has the basic ability to understand information from various modalities. However, the model still needs to improve its ability for processing different modalities and following human instructions. Some previous methods [43, 58, 53, 14] have proved that tuning the large model with instructions can help it to understand the intent of human beings. Inspired by these methods, in the second training stage, we further instruction-finetune ChatBridge to align the model with user intent on a wide range of multimodal tasks, enabling more effective zero-shot generalization on multimodal tasks. To this end, we carefully collect a multimodal instruction tuning dataset to funetune our model, where the instructions are multimodal containing text, image, video, and audio, while the responses are text only. A specific introduction to the data collection process will be provided in Section 3.3. We organized all the samples in a standardized format as below: 4 \u201c###Human: , ###Assistant: \" where is some human-craft template prompt that combines multiple sequences of unimodal embeddings from the perceivers. The LLM ingests the whole sequence and is trained to output the correct response . This training stage costs 10k steps with a batch size of 4k tokens on 8 A100 GPUs. 3.3 Multimodal Instruction Tuning Dataset \u2014 MULTIS We have developed a diverse dataset for multimodal instruction-tuning, named MULTIS, to instructionfinetune ChatBridge model. MULTIS consists of two distinct parts: task-specific data and multimodal chat data. The former presents standardized tasks that require concise responses, while the latter simulates real-world problem-solving scenarios by featuring open-ended dialogue between a human and a multimodal assistant. As shown in Figure 3, the whole collection of MULTIS covers 16 multimodal task categories and 15 source datasets. We hold out 6 datasets for model evaluation purposes. 
3.3.1 Task-Specific Data We collect a vast array of publicly available human-annotated multimodal datasets and transform them into a unified instruction tuning format. Specifically, a plethora of common Question-Answering (QA) and captioning datasets that contain image-text, video-text, and audio-text pairs are assembled, encompassing VQAv2[22], VG-QA[28], COCO Caption[34], MSRVTTQA[52], MSRVTT Caption[52], AudioCaps[26]. For each task, we employ ChatGPT [40] to derive 10~15 unique instruction templates, which are then manually filtered and refined to ensure rationales and diversity are optimal. As the public datasets inherently favor shorter responses, we craft instruction template modifiers to specify the desired response style, such as short and brief for short-answer data, and a sentence and single sentence for caption data. 3.3.2 Multimodal Chat Data While task-specific data empowers the model towards completing standardized tasks, multimodal chat data offers real-world, open-ended dialogues demanding more sophisticated intent comprehension and contextual reasoning abilities, as well as providing more diverse, helpful, human-like responses. Despite the image-to-text chat dataset generated by LLaVA-Instruct-150K [35], chat data across other modalities remains limited. To this end, we have constructed a multimodal chat dataset that comprises both unimodal and multimodal inputs of image, video, and audio modalities. We adopt LLaVA-Instruct-150K [35] as the image chat data. To incorporate additional modalities, namely video, audio, and video-audio content, we design a pipeline with the assistance of offline open-source models[27, 10, 60, 31, 69] and ChatGPT[40], as is shown in Figure 4. Following [35], we generate three types of instruction-following data including conversation, detailed description, and COCO MSRVTT AudioCaps VQA2 VGQA GQA MSRVTT QA LLaVA Conversation Video Conversation Audio Conversation Audio-Video Conversation LLaVA Description Video Description Audio-Video Description LLaVA Reasoning Video Reasoning Image Modality Video Modality Audio Modality Video-Audio Modality Reasoning Captioning QA Conversation Detail Description MVSD QA VATEX Clothov2 Flickr Dialogue Held-Out Data TaskSpecific Data Multimodal Chat Data MUSIC AVQA AVSD VALOR Figure 3: Tasks groups included in MULTIS multimodal instruction tuning dataset. 5 OpenAI ChatGPT/GPT4 SAM Region Semantic Video GRiT Frame Dense Caption BLIP2 Frame Caption The track extends from the train station. Two people riding a motorcycle down a train track. A train on the track. Audio Audio Caption: Idling car, train blows horn and passes Audio Labels: ['Railroad car, train wagon', 'Train horn', 'Rail transport', 'Train', 'Clickety-clack'] Integrate With ChatGPT Question: What are the people in the video doing? === Answer: I can see they are riding a motorcycle down a track, wearing helmets. === Question: So are there any trains on the tracks? === Answer: Yes, I could see a train appear on the right side of the video, traveling from right to left. I can clearly hear the sound of its horn. === Question: Why does the train whistle? === Answer: When it approaches or moves away from the station, it needs to sound a horn as a warning. Conversation Prompt for Instruction Data Generation Figure 4: Illustration of the pipeline to collect the multimodal chat data for MULTIS. complex reasoning. We obtain our video from the MSRVTT [52] dataset, audio from AudioCaps [26], and video-audio from VALOR [12]. 
In order to prompt a text-only ChatGPT or GPT-4, we convert those non-textual modalities into textual descriptions. Specifically, for each video, we extract three frames and employ Semantic-SAM[10], GRIT[60], and BLIP-2 [31] to develop annotations of the frames at the region semantic, region caption, and frame caption levels. We then concatenate these frame annotations in temporal order with the human-annotated video captions utilizing hand-crafted prompts. For each audio, we use its audio captions and labels from the original dataset. We combine those collections of fine-grained and global descriptions with manually designed seed examples to query ChatGPT or GPT-4 in an in-context-learning manner. With the above pipeline, we collect video, audio, and video-audio multimodal chat samples comprising 24k in conversations, 18K in detailed descriptions, and 9k in complex reasoning. Overall, MULTIS contains 4.4M task-specific samples and 209k multimodal chat samples. 4 Experiments 4.1 Zero-shot Task Evaluation We evaluate ChatBridge\u2019s zero-shot ability on the held-out datasets of MULTIS\u2019s task-specific data. The model is instructed with unimodal and multimodal inputs along with text instructions to generate the corresponding answers. 1 Unimodal Input Tasks Unimodal input tasks comprise of question answering (QA) and captioning tasks on image-text (OKVQA [37], GQA [24], Flickr Captioning [45], nocaps [1]), video-text (MSVDQA [52], VATEX [57]), and audio-text (clothoV2 [19]) datasets. QA tasks require the model to predict a short answer about the unimodal input, while captioning tasks require outputting a sentence description. As shown in Table 1, ChatBridge exhibits remarkable performance on unimodal input tasks, indicating successful alignment of unimodal input and language. On image-text datasets, our method achieves comparable performance as advanced image-based methods, Flamingo [3] and BLIP-2 [31], and achieves new zero-shot state-of-the-art (SoTA) on Flickr30k and VATEX captioning tasks. Multimodal Input Tasks Multimodal input tasks necessitate the ability in interpreting, correlating, and reasoning about cross-modal information. We evaluate on multimodal input tasks encompassing audio-visual question answering (MUSIC-AVQA [30]), audio-visual dialogue (AVSD [2]), and audio-visual captioning (VALOR [12]) tasks. These tasks analyze videos containing both visual and auditory content. Since our model represents a pioneering approach in handling multi-modal inputs, we perform ablation on the input modalities, as shown in Table 2. Our model achieves 1Please note that here \"unimodal input\" and \"multimodal input\" refer to the use of text and either a single or multiple modalities in image, video, and audio. 6 Table 1: Zero-shot evaluation of SoTA methods on unimodal input tasks. We report the accuracy for QA tasks and the CIDEr [55] score for captioning tasks. Methods Image-Text Tasks Video-Text Tasks Audio-Text Tasks OKVQA GQA Flickr30k NoCaps MSVD VATEX Clothov2 QA QA Caption Caption QA Caption Caption Finetuned SoTA 66.1 [18] 65.1 [70] 67.4 [72] 121.6 [31] 60.0 [12] 95.8 [12] 48.8 [38] Flamingo-9B [3] 44.7 61.5 30.2 39.5 Flamingo-80B [3] 50.6 67.2 35.6 46.7 BLIP-2 (FlanT5-XXL) [17] 42.4 73.7 98.4 34.4 BLIP-2 (Vicuna-13B) [17] 32.3 71.6 103.9 20.3 ChatBridge w/o MULTIS 41.4 37.4 77.7 107.5 23.5 47.7 22.4 ChatBridge 45.2 41.8 82.5 115.7 45.3 48.9 26.2 Table 2: Zero-shot evaluation of the effect of multimodal inputs on multimodal input tasks. 
Input Modality AVSD Dialogue VALOR Captioning MUSIC-AVQA BLEU-4 CIDEr BLEU-4 CIDEr Acc. Finetuned SoTA 40.0 [44] 108.5 [44] 9.6 [12] 61.5 [12] 78.9 [12] Video 28.3 73.1 2.8 22.3 33.1 Audio 20.2 46.2 0.3 5.2 28.9 Video+Audio 29.8 75.4 4.2 24.7 43.0 better performance across all three tasks when incorporating both video and audio for solving these tasks, validating its capability to correlate and cooperate different modalities. We observe that video information has a greater influence on video-audio tasks, resulting in relatively higher performance for the video-only input modality as opposed to the audio-only counterpart. Nonetheless, amalgamating audio and video details enhances performance across all tasks to varying extents. Consequently, there is potential for further exploration in constructing a modality-balanced multimodal evaluation benchmark. Effect of Instruction Tuning with MULTIS Data We also conduct a comparison of the performance in zero-shot task evaluation before and after applying our multimodal instruction tuning training stage. As depicted in Table 1, after instruction tuning, the model demonstrates varying degrees of performance improvement across different downstream tasks. Specifically, we observe a 21.8% boost in accuracy in MSVDQA, a 3.8% improvement in OKVQA, and a 3.6% improvement in GQA. Moreover, our approach also yields advancements in captioning tasks concerning Cider Score. The experimental results demonstrate that the utilization of multimodal instruction tuning data within MULTIS can facilitate the model\u2019s abilities in integrating diverse modalities and generalizing effectively to unseen tasks. 4.2 Multimodal Chat Evaluation Chat with Unimodal Input We conduct a comparative analysis of our model with three imagebased LLMs: BLIP-2[31], LLaVA[35], and MiniGPT-4[73]. The evaluation is based on GPT-4 generated image-text chat data provided by LLaVA[35], comprising 90 samples. We follow the evaluation protocol proposed by LLaVA[35], where GPT-4 is used to evaluate the helpfulness, relevance, accuracy, and level of detail of each model\u2019s responses. The responses are scored on a scale of 1 to 10, where a higher score indicates superior overall performance. As depicted in Table 3a, our method surpasses other approaches that also employ a frozen LLM, i.e. BLIP-2 and MiniGPT-4. Nevertheless, our approach still falls short of the performance demonstrated by LLaVA, which fine-tuned the LLM end-to-end. As an avenue for future research, enhancing the model\u2019s capability to process unimodal input with efficacy could be considered. Chat with Multimodal Input We also examine the ability of ChatBridge to perform humanassistant chat on multimodal inputs. Firstly, we construct a high-quality test set of multimodal chat data by following a similar pipeline as our MULTIS. Specifically, we employ GPT-4 [41] to produce a pool of candidate samples, where we manually choose 90 high-quality samples based on the following criteria: diversity of instructions, correctness of responses, and reliance on different modalities. The selected samples are categorized into three types: reasoning (solving problems through logical thinking and analysis), understanding (comprehending information and interpreting its meaning), and knowledge-based (applying specific knowledge to answer questions accurately). 7 Table 3: Multimodal chat evaluation results. Response qualities of different methods are assessed by GPT-4 (text-only). (a) Multimodal chat evaluation on image-text chat data. 
Methods ReasonDescripConverOverall -ing -tion -sation LLaVA [35] 8.87 7.07 7.63 7.86 BLIP-2 [31] 5.80 6.00 7.03 6.28 MiniGPT-4 [73] 7.53 7.27 5.63 6.14 ChatBridge 7.17 6.23 7.23 6.88 (b) Multimodal chat evaluation on video-audio-text chat data. Input UnderstReasonKnowOverall Modality -anding -ing -ledge Video 5.86 5.27 7.70 6.15 Audio 2.43 3.77 8.09 4.24 Video+Audio 6.10 6.73 8.43 6.87 Multimodal Chatbots Round Table Discussion Explain why this meme is funny. This meme is a play on the phrase \"Monday, just Monday\". it shows a dog laying on the floor with its head resting on its paws, looking dejected. The caption reads \"Monday. just Monday\u201d which is a common expression used to express the feeling of dread or exhaustion that many people feel on the first day of the week. The meme is likely intended to be humorous and to poke fun at the idea that Mondays are particularly difficult or challenging.it's a funny meme that is used to express the feeling of exhaustion or dread that many people feel on the first day of the week. This meme is funny because it shows a dog sleeping on the floor with the caption, \u201cMonday just Monday.\u201d It implies that the dog is so tired from the weekend that it can\u2019t even get up on Monday. The humor comes from the contrast between the dog\u2019s lazy behavior and the expectation that it should be more active on a weekday. a dog laying down on a wooden floor with the caption monday just monday. ChatBridge Mini GPT-4 LLaVA BLIP-2 Multimodal Chatbots Round Table Discussion The meme is funny because it features a bulldog puppy lying on a wooden floor with the words \"Just Monday...\" written on it. The puppy's sad expression and posture convey the relatable sentiment that many people experience on Mondays the start of a new workweek, exhaustion, or general dissatisfaction with the beginning of the week. The meme plays on the idea that even a puppy, usually associated with happiness and energy, can be affected by the less-than-enthusiastic feelings connected to Monday mornings. This juxtaposition creates humor and connects with the audience on a relatable level. Figure 5: A comparison of SoTA models\u2019 responses to an image-based reasoning task. To illustrate the effectiveness of our model in processing multimodal inputs, we primarily compare the performance of our model when exposed to different modality inputs. Similar to the above evaluation, we use GPT-4 to score the responses of each method. According to the evaluation results reported in Table 3b, the model incorporating both audio and video inputs outperforms the model relying solely on unimodal inputs on all three sub-tasks. This finding underscores the model\u2019s ability to effectively harness the complementary information inherent in each modality, thereby facilitating a more comprehensive comprehension of the data. Qualitative Analysis Furthermore, Figure 5 depicts ChatBridge\u2019s ability to generate more comprehensive responses when both video and audio modalities are present, providing evidence for its competence in co-relating and cooperating different modalities. Finally, the multi-round conversation cases illustrated in Figure 6 and Figure 8 demonstrate ChatBridge\u2019s efficacy in following human instructions and comprehending both unimodal and multimodal contents. In Figure 5, we show an image-base reasoning case to SoTA image-based LLMs, and compare their responses. 
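Stepping back to the GPT-4-as-judge protocol used for both Table 3a and Table 3b, the sketch below illustrates one way the 1-10 scoring could be implemented; the judge prompt wording and the call_gpt4 placeholder are assumptions, and the actual evaluation follows the protocol and scripts proposed by LLaVA [35].

```python
import re
import statistics

JUDGE_PROMPT = (
    "You are evaluating the response of an AI assistant to a multimodal question.\n"
    "Context: {context}\nQuestion: {question}\nAssistant response: {response}\n"
    "Rate the helpfulness, relevance, accuracy, and level of detail of the response "
    "on a scale of 1 to 10. Reply with a single number."
)

def call_gpt4(prompt: str) -> str:
    """Placeholder for a call to the text-only GPT-4 judge; wire in an actual API client here."""
    raise NotImplementedError

def judge_sample(context, question, response):
    """Ask the judge for a 1-10 score and parse the first number in its reply."""
    reply = call_gpt4(JUDGE_PROMPT.format(context=context, question=question, response=response))
    match = re.search(r"\d+(\.\d+)?", reply)
    return float(match.group()) if match else None

def average_score(samples, responses):
    """Average judge score over the evaluation set (e.g. the 90 chat samples)."""
    scores = [judge_sample(s["context"], s["question"], r) for s, r in zip(samples, responses)]
    return statistics.mean(s for s in scores if s is not None)
```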
Our ChatBridge accurately recognizes the text in the image and almost perfectly understands the humor of the image. Furthermore, Figure5 depicts ChatBridge\u2019s ability to generate more comprehensive responses when both video and audio modalities are present, providing evidence for its competence in co-relating and cooperating different modalities. Finally, the multi-round conversation cases illustrated in Figure 6 and Figure 8 demonstrate ChatBridge\u2019s efficacy in following human instructions and comprehending both unimodal and multimodal contents. 5 Discussions This paper proposes ChatBridge, a multimodal language model capable of interpreting, correlating, and reasoning about various modalities through two-modality data paired with language. We introduce MULTIS, a multimodal instruction tuning dataset, to instruction-tune our model, which coveres 8 What activities are the people involved in? The video excerpt is from The Godfather. Describe this audio. This audio is a recording of a train passing by, likely on a track. The sound is a combination of the train\u2019s engine, wheels on the tracks, and some background noise. The recording is fairly clear and the sound of the train is prominent. ChatBridge Audio Content: (A train is coming in.) ChatBridge What other sounds do trains usually make? Trains can make a variety of sounds, including the sound of the engine, the sound of the wheels on the tracks, the sound of the brakes being applied, the sound of the whistle or horn, and the sound of the cars (boxcars, tankers, etc.) clanking or rattling. Additionally, trains can make noise from the movement of the train, such as the swaying of the cars or the rattling of the windows. ChatBridge What does it mean when a train blows its whistle? ChatBridge A train blowing its whistle is typically a warning to alert pedestrians, other trains, or other vehicles that the train is approaching a crossing or a station. The train\u2019s whistle is used as a safety measure to ensure that people are aware of the train\u2019s presence and take appropriate action to avoid being hit by the train. Can you provide me with more background information about this movie? The Godfather is a classic American crime film directed by Francis Ford Coppola and released in 1972. It is based on the novel of the same name by Mario Puzo and follows the story of the Corleone family, a powerful mafia clan in New York City. The movie is widely regarded as one of the greatest films of all time and has won numerous awards, including seven Academy Awards. ChatBridge Who is the male actor appearing in the video? The male actor appearing in the video is Marlon Brando, who played the character of Don Vito Corleone in The Godfather. Brando\u2019s portrayal of the powerful and complex patriarch of the Corleone family is widely regarded as one of the most iconic performances in film history. His performance earned him an Academy Award for Best Actor. ChatBridge Figure 6: Multi-round conversation cases with audio or video inputs. What is the states of the plane? The plane is parking on the tarmac of an airport. It is surrounded by a bridge, a building, and a car parked nearby. The background audio is a mix of wind and mechanical sounds, indicating that it is being activated. ChatBridge ChatBridge w/o Audio ChatBridge w/o Video A jet airliner parked is parking at the gate, waiting to depart. There are several other planes parked nearby. 
The airport is bustling with activity, with people walking around and possibly boarding or disembarking from the planes. The engine of the plane is roaring. Audio Content: (A jet engine idles as the wind blows.) Figure 7: A comparison of ChatBridge\u2019s responses to different input modalities with a video-audio input. What\u2019s the overall vibe in the video like? The overall vibe in the video is a mix of a dark, eerie atmosphere and a sense of movement, as the video features a man walking through a city at night, with a car driving in the background. The audio adds to the atmosphere, creating a sense of suspense and tension. The combination of the visuals and audio creates a unique and immersive experience for the viewer. Audio Content: (Instrumental music playing as a woman speaks.) ChatBridge Tell me more about the background audio. The background audio in the video is a mix of music and sound effects, which contribute to the eerie atmosphere and tension. ChatBridge Are the taxi and bus traveling in different directions in the video? Answer me in Chinese. ChatBridge \u5728\u89c6\u9891\u4e2d\uff0c\u516c\u4ea4\u8f66\u548c\u51fa\u79df\u8f66\u7684\u9a7e\u9a76\u65b9\u5411\u4e0d\u540c\u3002\u516c\u4ea4\u8f66\u4ece\u5de6\u5230\u53f3\u884c \u9a76\uff0c\u800c\u51fa\u79df\u8f66\u4ece\u53f3\u5230\u5de6\u884c\u9a76\u3002 Figure 8: A multi-round conversation case with a video-audio input. a wide range of multimodal tasks in text, image, video, and audio modalities. Consequently, our model exhibits strong and noteworthy quantitative results on zero-shot multimodal tasks. Moreover, qualitative examples demonstrate ChatBridge\u2019s diverse capabilities in following multimodal instructions, such as complex multimodal reasoning, knowledge-grounded multimodal understanding, and multi-turn conversations. These contributions and findings may pave the way for future research into building general-purpose multimodal assistants. Limitations Here, we describe some limitations of our model as well as opportunities for further improving our model. Specifically, we note the following: 1) We observe that our model exhibits weaknesses in understanding and grounding long-range videos and audios, necessitating a more precise temporal modeling approach. 2) Our framework can be extended to incorporate additional modalities, such as sketch and point cloud. 3) Though the frozen modules in our framework alleviate computational burden, they may lead to insufficient performance and also introduce prior biases from pretrained models. 9" + }, + { + "url": "http://arxiv.org/abs/2312.03088v1", + "title": "LLMs for Multi-Modal Knowledge Extraction and Analysis in Intelligence/Safety-Critical Applications", + "abstract": "Large Language Models have seen rapid progress in capability in recent years;\nthis progress has been accelerating and their capabilities, measured by various\nbenchmarks, are beginning to approach those of humans. There is a strong demand\nto use such models in a wide variety of applications but, due to unresolved\nvulnerabilities and limitations, great care needs to be used before applying\nthem to intelligence and safety-critical applications. This paper reviews\nrecent literature related to LLM assessment and vulnerabilities to synthesize\nthe current research landscape and to help understand what advances are most\ncritical to enable use of of these technologies in intelligence and\nsafety-critical applications. 
The vulnerabilities are broken down into ten\nhigh-level categories and overlaid onto a high-level life cycle of an LLM. Some\ngeneral categories of mitigations are reviewed.", + "authors": "Brett Israelsen, Soumalya Sarkar", + "published": "2023-12-05", + "updated": "2023-12-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "With the advent of GPT-3 there seems to have been a rapid acceleration in the observed capabilities of large language models (LLMs) being released. This explosion of capability came as somewhat of a surprise because the main \u2018in- novation\u2019 was to increase the number of parameters being used in the models (which now number in the hundreds of billions [25]). This resulted in \u2018emergent\u2019 capabilities, or capabilities that were not explicitly sought during the design and training process [25]. This increase in capability has captured the interest and imaginations of the public and technologists alike, and propelled LLMs into the limelight. While the extent to which LLMs will be integrated into our lives is yet to be seen, it is clear that they will have a profound impact. 1 arXiv:2312.03088v1 [cs.CL] 5 Dec 2023 One crucial roadblock is that LLMs are still quite limited in their capabilities to perform as expected. These limitations and vulnerabilities are not new, they had been identified before the most recent generation of highly-capable LLMs, but began to receive more attention as scientists and researchers began consid- ering how LLMs might be applied to various safety-critical applications. In such applications there is very little tolerance for behaviors that cannot be predicted and fully understood; safety-critical applications also require that vulnerabilities be addressed before they can be deployed into real-world environments where they can be subject to non-ideal circumstances, and even adversarial attacks. Examples of some use cases that we consider to qualify as \u2018safety-critical\u2019 in- clude: (i) Automated requirement generation for software, (ii) Digital assistant for planning, (iii) reasoning and decision making, (iv) multi-modal inspection, (v) interactive ISR with Intelligence analysis, (vi) Contextualized summariza- tion for intelligence analysis, and (vii) survey analytics. This paper reviews recent LLM literature on the vulnerabilities and limi- tations of LLMs in order to identify the current understanding, and what ap- proaches are being taken to address them. We further discuss implications to intelligence and safety-critical applications.", + "main_content": "There has been a recent explosion in literature focusing on vulnerabilities of LLMs due to the surprising increase in capability shown initially by GPT and its subsequent versions as well as other similar models released by companies like Google, Facebook, Anthropic and others. This interest isn\u2019t new, but has definitely increased in recent months. In the context of LLMs we define vulnerabilities as: properties or behaviors of LLMs (generally, or individually) that make them prone to degraded performance through attack, misuse, or normal operation. A review of some of the recent literature was performed with the goal of better understanding and synthesizing current understanding about the vulnerabilities of LLMs; the key results of this review are included in Section 2.1. Sections 2.2 and 2.3 offer a quick overview of the models and datasets used in the reviewed papers. 
2.1 Literature Synthesis There are many different lists and assessments of the limitations and vulnerabilities of LLMs that have been produced. The amount of literature referencing such topics has only increased recently. The main goal of this review was to synthesize some of the existing literature to get a better idea of the \u2018landscape\u2019. Based on review of approximately 20 recent LLM papers, Figure 1 gives an illustration of some high-level classes of vulnerabilities and the stages of the LLM life cycle where they are typically manifest. 2 Figure 1: Illustration of LLM vulnerabilities. The vulnerabilities have been overlaid on a high-level life cycle of an LLM to illustrate phases of development where vulnerabilities might be addressed. Each of these categories are made up of smaller sub-categories that are listed in Table 1, and individual references are given for each of the sub-categories. Short definitions of each vulnerability class follow. Alignment The degree to which the underlying behavior (driven by the training objectives, loss and reward functions) of the model aligns (or matches) the behavior expected/desired by stake-holders. This category also includes ethical considerations since these are often assumed (wrongly) to be met. Reliability The degree to which the model is capable of performing in a reliable manner. Typical issues that can affect reliability include: \u2018hallucination\u2019, \u2018miscalibration\u2019, and \u2018misinformation\u2019 (outputs that are incorrect). Safety and Privacy This category encompasses behaviors concerned with critical undesirable behaviors. Having a model exhibit such behaviors, even rarely, is not acceptable. Well-known behaviors in this category include: \u2018emergence\u2019 (un-programmed and un-tested behaviors), \u2018information leaking\u2019 (divulging of sensitive or private information), and \u2018legality\u2019 (model providing outputs that support illegal, or otherwise forbidden, behaviors). Fairness Includes model behaviors that are concerned with consistently reliable and balanced treatment across different subgroups including prompts (i.e. different languages and dialects) and responses (i.e. not referring to certain relevant information more often than other relevant information). This class includes behaviors such as: \u2018bias\u2019, \u2018injustice\u2019, and \u2018performance disparity\u2019 (better performance in some contexts than others). Social Norm Since these systems are interacting with individuals who live in a society, it is important that they accurately reflect the values of that society. This category is related to behaviors such as \u2018cultural insensitivity\u2019, 3 \u2018toxicity\u2019, and \u2018unawareness of emotions\u2019. Such behaviors are at the center of interpersonal relationships; those who violate such norms typically pay a social price, and deployed LLM models should comply as well. Cyber-Related At the most basic level, LLMs share the same vulnerabilities of all information-technology systems. Cybersecurity vulnerabilities are fairly well understood, but have unique ways in which they can be manifest in the LLM life cycle. 
Some typical vulnerabilities included in this class include \u2018prompt injection\u2019 (inserting certain text into prompts that override LLM controls), \u2018insecure output handling\u2019 (giving the LLM excessive, and unsafe, amounts of freedom in outputs), and \u2018model denial of service\u2019 (ability of third-parties to deny service to others through overwhelming the model servers with requests). Misuse This class of vulnerabilities is based on the ways in which LLMs can be misused (on purpose or not). Some examples of misuse include \u2018copyright infringements\u2019 (where the LLM produces content that infringes on copyright), \u2018propaganda\u2019 (where an LLM might be utilized to produce highly believable, but misleading, information), and \u2018social engineering\u2019 (when LLM outputs might be used to impersonate real people in order to try to manipulate them). Robustness This class includes vulnerabilities related to circumstances that may cause an LLM to perform differently in certain situations than in others. This includes various considerations about training data (including poisoning and prompt attacks, dynamic data such as real-time news, and distributions shifts), this can also include possible performance changes observed when utilizing \u2018personas\u2019 (asking the LLM to act as if it were a certain individual, or had certain expertise). Explainability This category is concerned with the ability of human users to understand the reasoning process of the LLM. When an LLM is asked questions that require reasoning it is important to know the limitations of the LLM to reason. Also, more generally, in many critical circumstances it is crucial for those with decision-making authority to be able to check and verify certain information before proceeding. Supply Chain As LLMs become more complex it is increasingly common that different pieces are sourced from third-parties. This results in the overall performance being reliant on the cumulative performance of many disparate parts in the supply chain. This means quality-control and standards are necessary to ensure all sub-components (and their inevitable updates) can be tested for quality. Also, as \u2018foundation models\u2019 become more common, there will likely be many derivative systems that heavily rely on them possibly exposing them to any vulnerability of the foundation model. 4 Each class of vulnerabilities is contained within a light-blue box; the width of the box varies to overlap with the stage of the LLM life cycle where the vulnerability is most likely to be manifest. It is important to note that the assignment to life cycle stages is not exact, but should help highlight \u2018typical\u2019 areas of concern. As an example, the \u2018Robustness\u2019 category spans all three phases of the LLM life cycle because robustness vulnerabilities are manifest in each of these phases. In the \u2018Data Preparation\u2019 stage: poisoning, and data curation; in the \u2018Training\u2019 stage: distribution shifts, and (again) data curation; and in the \u2018Deployment\u2019 stage: prompt attacks, and distribution shifts. In contrast the \u2018Misuse\u2019 category sits only in the \u2018Deployment\u2019 stage because that is the only time in which the LLM is actually used (issuing responses for prompts). 
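Before turning to Table 1, a small data-structure sketch may help make the category-to-stage mapping concrete. Only the Robustness (all three stages) and Misuse (deployment only) assignments are stated explicitly above; every other stage set below is an illustrative placeholder rather than a claim from the surveyed papers.

```python
# Ten high-level vulnerability classes mapped to the life cycle stages where they
# typically appear (cf. Figure 1). Assignments marked "placeholder" are guesses.
LIFE_CYCLE_STAGES = ("data_preparation", "training", "deployment")

VULNERABILITY_STAGES = {
    "alignment":      {"training", "deployment"},                      # placeholder
    "reliability":    {"training", "deployment"},                      # placeholder
    "safety_privacy": {"data_preparation", "training", "deployment"},  # placeholder
    "fairness":       {"data_preparation", "training", "deployment"},  # placeholder
    "social_norm":    {"training", "deployment"},                      # placeholder
    "cyber":          {"deployment"},                                  # placeholder
    "misuse":         {"deployment"},                                  # stated in the text
    "robustness":     set(LIFE_CYCLE_STAGES),                          # stated in the text
    "explainability": {"deployment"},                                  # placeholder
    "supply_chain":   {"data_preparation", "training", "deployment"},  # placeholder
}

def classes_at_stage(stage: str):
    """Return the vulnerability classes a reviewer should consider at a given stage."""
    return sorted(name for name, stages in VULNERABILITY_STAGES.items() if stage in stages)

print(classes_at_stage("deployment"))
```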
Table 1: Vulnerabilities broken down by category Category Vulnerabilities Papers alignment alignment [3], [12], [14], [22], [27] ethics [5] cyber excessive agency [18] insecure output handling [18] insecure plugin design [18] model denial of service [18] model theft [18] prompt injection [12], [18], [30] explainability causal reasoning [16] interpretability [16] logical reasoning [16] fairness bias [4], [16], [21], [23] inequity [5] injustice [16] lack of diversity [4] performance disparity [16] preference bias [16] stereotyping [16] misuse copyright [16] cyberattack [16] mistrust [26] misuse [5], [7], [26] overreliance [18] propaganda [16] social-engineering [16] reliability hallucination [16], [17], [19] inconsistency [16] miscalibration [1], [2], [8], [10], [11], [13], [15], [16], [22], [29] misinformation [16], [26] sychopancy [16] 5 Table 1 continued from previous page Category Vulnerabilities Papers reliability unfaithful explanations [24] robustness data curation [4] distribution shifts [16] dynamic data [4] imitating personas [27] interventional effect [16] paradigm shifts [16] poisoning attacks [16] prompt attacks [16] training data poisoning [18] safety/privacy adult content [16] emergence [9], [23], [25] information leaking [26] legality [5] mental health [16] privacy violation [16] sensitive information disclosure [18] unlawful conduct [16] violence [16] social norm cultural insensitivity [16] toxicity [16], [23] unawareness of emotions [16] supply chain economic incentives [5] environment [5] homogenization [23] lack of industry standards [25] supply chain vulnerabilities [18] 2.2 Referenced Models In the set of papers we reviewed there were thirty-two models that were discussed. These models are listed in Table 21 with a short description and references to the particular papers that specifically mentioned them. These models represent a reasonably large subset of the state-of-the-art at the time of writing of this paper (Fall 2023). Some of the models represented are \u2018fine-tuned\u2019 versions of other models (i.e. \u2018alpaca\u2019 is fine-tuned from \u2018LLaMA\u2019, and \u2018deberta\u2019 is extended from \u2018RoBERTa\u2019. Some of the models (like \u2018llama\u2019 and \u2018opt\u2019) are open source, and are able to be analyzed in more detail than other proprietary models (i.e. the \u2018gpt\u2019 family, and \u2018claude\u2019). Judging by number of references in the surveyed papers, the \u2018gpt\u2019 models and \u2018llama\u2019 are the most popular. 1One \u2018model\u2019 listed in Table 2 is called \u2018general\u2019. This represents papers that focused on topics that are general to all LLMs and not specific to particular implementations and/or architectures. 
6 Table 2: Summary of LLM models included in surveyed papers Model Description Papers alpaca fine-tuned from LLaMA [1], [6], [14] bert family of language models introduced by Google in 2018 [6] bloom fine-tuned from Megatron-LM [6] chatglm open bilingual language model based on GLM [30] chatgpt OpenAI proprietary model [21] claude Anthropic proprietary model [24], [28], [30] codegen autoregressive LM for program synthesis [10] custom decoder-only transformer model [3] davinci-002 GPT-3.5 variant using instruction tuning [8] davinci-003 GPT-3.5 variant using RLHF [1], [8] deberta extension of RoBERTa [6] falcon open model released by TII [30] flan Google model with instruction tuning [6] general not specific to particular architecture [2], [4], [5], [7], [18], [22], [23], [25], [26], [27] google lm collection of Google LMs [9] gopher 280 Billion parameter model from Google [9] gpt-2 Open AI; 2019 release [6], [10] gpt-3 Open AI; 2020 release, instruction tuning and training on code [1], [9], [10], [15], [17], [24], [28], [29], [30] gpt-3.5 Open AI; 2022 release, supervised instruction tuning and RLHF [1], [10], [15], [24], [28], [29], [30] gpt-4 Open AI; 2023 release, increased memory, includes multi-modal inputs [1], [19], [28], [29], [30] gpt-j Eleuther AI [6] gpt-neo Eleuther AI [14] guanaco based off of LLaMA model family; used LoRA fine-tuning [30] incoder code generation LM decoder only transformer [10] llama Meta open-source LLM; released 2023 [6], [8], [10], [12], [13], [14], [15], [17], [30] mpt open source LLM from MosaicML [30] opt Open Pre-trained Transformer architecture; proposed by Meta [8], [11], [15] palm-2 Google\u2019s \u201cnext gen\u201d LM; 2023 release [20], [28] roberta evolution of BERT with improved training [6] santacoder code generation LM trained on Github code [10] 7 Table 2 continued from previous page Model Description Papers t5 Text-to-Text Transfer Transformer; Google [6] vicuna open-source fune-tuned on LLaMA [1], [14], [29], [30] 2.3 Referenced Datasets Varied datasets and benchmarks are necessary for evaluating the performance, and limitations, of LLMs. In the papers that we studied there were 42 datasets referenced. A list of them can be found in Table 32. While far from exhaustive, these datasets span several different common and relevant use-cases for LLMs. Such use cases include the more typical and general question-answering (QA), text summarization, and language understanding tasks, but also include those drawn from more niche datasets such as adversarial robustness, business ethics, math, science, coding, robotics, music, and law. Most of the listed datasets could be considered \u201cstandard\u201d AI/ML datasets. There is a small subset that are more typically used in human psychology and behavioral science (i.e. \u2018big five personality\u2019, \u2018g7 members political\u2019, \u2018myersbriggs\u2019, and \u2018political-compass\u2019). These tests were used by some researchers to assess traits and alignment normally assigned only to humans in an effort to better understand the behavior of LLMs. This highlights a crucial point: with LLMs having enough capability to take tests traditionally meant for humans there is need to extend evaluation from tests traditionally meant for \u2018machines\u2019. Previous datasets provide useful assessments in some ways, but come up lacking in others. Finally, several \u2018meta\u2019 or \u2018conglomerate\u2019 datasets have started to emerge. These larger datasets include a collection of smaller datasets. 
These kind of benchmark datasets are useful because, while in theory it would be nice to test LLM models on every possible use case, pragmatically the number of tests needs to be scaled with risk and focused on use-case specific vulnerabilities. Examples of such benchmarks are: \u2018big bench\u2019: The Beyond Imitation Game (BIG) Bench dataset includes many smaller datasets that are meant to be used to evaluate the extent to which a model has \u2018understanding\u2019 of certain concepts and is not solely imitating understanding. \u2018socket\u2018: The Social Knowledge Evaluations Tests (SocKET) are a conglomeration of evaluations focused on quantifying the degree to which a model has \u2018understanding\u2019 of social knowledge including topics such as empathy and humor. 2One \u2018dataset\u2019 listed in Table 3 is called \u2018general\u2019. This represents analysis of concepts generally applicable across all datasets. 8 Table 3: Summary of datasets used in surveyed papers Dataset Description Papers advbench dataset for adversarial robustness [12], [30] bertology tools for accessing inner representations of BERT [9] BBQ hand build bias benchmark for QA [24] big bench Beyond the Imitation Game (BIG) collaborative benchmark for evaluating LLMs [3], [9], [24], [25] big five personality personality test proposed in 1949 [21] biz-ethics subset of MMLU [29] chords-db javascript database of string instrument chords [28] cnn/daily mail text summarization dataset [10] compas inmate recidivism risk score [9], [21] coqa converastional question answering challenge [8], [11], [15] custom custom-made dataset [28] dark factor personality test for quantifying aversive personality traits [21] dateund dataset designed to test LLM ability to understand dates [29] eli5-category dataset for long-form question answering [10] folio expert-written, open-domain, logically complex and diverse dataset for natural language reasoning with first-order logic [28] g7 members political political affiliation tests from G7 member states [21] general analysis/discussion of concepts generally applicable across datasets [2], [4], [5], [7], [18], [22], [23], [26], [27] gsm8k grade school math word problems [29] hardware tabletop rearrangement robotic rearrangement task [20] hellaswag dataset to study grounded commonsense inference [3] hh helpful and harmless RLHF dataset [3], [14] humaneval programming problems to evaluate code generation [10], [28] kuq Known-Unknown Questions [1] 9 Table 3 continued from previous page Dataset Description Papers lambada dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task [3] mbpp crowd-sourced python programming problems [10] mmlu Massive Multitask Language Understanding (MMLU) benchmark to measure knowledge acquired by LLM [13] movielens movie ratings dataset [9] myers-briggs well-known personality test [21] natural questions dataset of user questions with short and long form answers [15] nejm case reports natural language medical case reports with voting on diagnosis [19] political compass test for evaluation of political affiliation [21] prf-law subset of MMLU [29] real toxicity prompts dataset of sentence snippets for evaluation of toxicity [9] sciq science exam questions [8] socket dataset for evaluating sociability of NLP models [6] strategyqa benchmark where required reasoning steps are implicit in the question [29] trivia qa reading comprehension dataset [8], [11], [15] truthfulqa benchmark for testing whether an 
LM is truthful in generated responses [25] wic word-in-context dataset; tests whether LM can identify meaning of a word [25] wikibio dataset of biographies from Wikipedia [17] wiki-qa annotated set of question and answer paris [10] wmt 2014 machine translation dataset [10] 10 3 Mitigation and Detection Regarding LLM vulnerabilities the body of current literature presents many more problems than solutions; this indicates the phase of research that the community is currently in. New methods for handling vulnerabilities are being rapidly being proposed, and we expect this landscape to continue to change rapidly. Mitigation of vulnerabilities/poor performance is tightly-linked with the ability to detect such problems. Generally, literature discusses methods for detection of undesirable behavior, and then assumes using that as feedback to modify the model. Many of the new detection methods rely on quantifying performance on specialized datasets (see Section 2.3) developed with the specific purpose of testing against a particular vulnerability. Beyond that, the literature reviewed discusses some nascent approaches in the following categories: Uncertainty Quantification \u2013 One critical capability is to be able to quantify the uncertainty an LLM has in generated responses. Standard LLMs do not include this capability \u2018off the shelf\u2019. There are a couple promising approaches to address this: \u2022 Conformal Prediction (CP) \u2013 Conformal prediction is a method for estimating the uncertainty in a model\u2019s output [2]; CP is not a technology specific to LLMs, it has been used in other domains as well. The key to CP is to use a \u2018holdout\u2019 set that is used to calibrate a \u2018score quantile\u2019; in a classification task the cumulative set of classes whose softmax scores are less than the score quantile are returned; the fewer classes in this set, the less uncertainty. \u2022 Quantifying response variability using Semantic Entropy (see [11],[29]) \u2013 developing a measure of semantic similarity to evaluate the range of similar responses (where \u2018similar\u2019 can be defined in different ways) produced by a LLM to the same prompt. If the LLM produces responses that have widely varying semantic similarity the uncertainty can be seen as high. \u2022 Response Ranking, and LLM-Based uncertainty estimates (see [29]) \u2013 some investigation has occurred in evaluating the extent to which an LLM is capable of assessing its own uncertainty by way of prompting the LLM to include such information in its response (for example: \u201cprovide your confidence between 0-100% in the response\u201d). In the case of verbalized confidence (where the LLM is supposed to explicitly state confidence in its response), LLMs tend to be overconfident; however using other prompting strategies such as \u2018Top-K\u2019 (ranking the top K answers), or Chain-of-Thought (asking for the explicit reasoning process that lead to an answer) seems to improve the outcomes. Alignment \u2013 Methods for coercing the underlying behavior of the LLM to align 11 with human expectations. This typically applies to auxiliary behaviors that we not directly trained originally. \u2022 Alignment Prompts \u2013 This involves prepending special kinds of prompts to help direct the LLM to behave \u2018better\u2019 (i.e. avoiding bias in its response). This method is not recommended because it may encourage the model to present an \u2018aligned facade\u2019 without actually changing its alignment [3]. 
\u2022 Context Distillation \u2013 instead of using aligning prompts (that have several drawbacks), you can fine-tune on the aligning prompt. This would avoid having to take limited prompt space with pre-specified text, and also fine-tuning actually changes the underlying behaviors of the LLM getting away from the \u2018aligned facade\u2018 mentioned above [3] \u2022 Preference Model Training \u2013 It is useful to have a well-catered dataset to help create a \u2018preference model\u2019 that can be used for reinforcement learning improvement of LLM behavior. This is called reinforcement learning from human feedback (RLHF) and is currently state-of-theart for LLM alignment. Askell 2021 [3] found that using \u2018ranked preference\u2019 for this phase is superior to \u2018binary preference\u2019. \u2022 Preference Model Pretraining \u2013 This approach to alignment involves training on less-tailored preference datasets (such as Stack-Exchange ranked responses); this method is attractive largely due to the increased availability of data, where fine-tuned alignment datasets typically require expensive hand-labeled datasets. This step would be followed by a fine-tuning stage later. It has been noted that binary preferences seem to work better at this stage [3]. \u2022 RAIN (see [14]) \u2013 introduces ability to rewind autoregressive outputs if a specific token leads to generation of undesired output downstream. This is a method to allow self-alignment assuming you have the capability of detecting poor outputs. Processes and Governance \u2013 Besides traditional \u2018technical\u2019 solutions it can be just as, or more, effective to implement processes and governance structures to ensure vulnerabilities are reduced or eliminated (see [22] and [26]). As an example mentioned in [26] is that the risk of \u2018misinformation\u2019 form LLMs is such a large-scale problem that it is unlikely that technical solutions alone will suffice; regulation, policy, and other society-level controls are necessary to effectively mitigate this problem. 3.1 Persistence of Vulnerabilities It is important to recognize that there are fundamental limitations to the extent to which these models can actually be aligned. It has been shown that there exist prompts that can trigger any behavior that has a finite probability 12 being exhibited [27]. This means that if behaviors aren\u2019t eliminated during the alignment process they will not be guaranteed safe against prompt attacks. The same paper also found that using \u2018personas\u2019 (using prompts to ask the LLM to respond as if it were \u2018an expert\u2019 or some other non-LLM role) can serve as a shortcut to bypassing alignment training. One positive takeaway from [27] is that the better aligned the model, the longer the prompt has to be to elicit nonaligned responses; in this way \u2018aligning prompts\u2019 (prepended text added to all prompts) were shown fairly effective especially when the overall prompt length is limited (see more in-depth discussion on alignment in the previous section). 4 Summary and Conclusion The current generation of LLMs have greatly improved performance due to emergent capabilities (at least this seems the case from current thought) that have enable marked improvement an various different kinds of tasks. This increase of capability came as a surprise to many, and has led many to seriously consider adoption of LLMs to aid or enable various technologies. 
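Before concluding, we give a brief illustration of the split conformal prediction recipe mentioned under uncertainty quantification in Section 3. The nonconformity score (one minus the softmax probability of the true class) and the coverage level alpha follow the standard split-conformal formulation and are not drawn from any specific surveyed implementation.

```python
import numpy as np

def calibrate_quantile(cal_softmax, cal_labels, alpha=0.1):
    """Calibrate a score threshold on a holdout set (split conformal prediction).

    Nonconformity score = 1 - softmax probability of the true class.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_softmax[np.arange(n), cal_labels]
    level = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample-corrected quantile level
    return np.quantile(scores, min(level, 1.0))

def prediction_set(softmax_probs, qhat):
    """Classes kept in the prediction set; a smaller set signals lower uncertainty."""
    return np.where(1.0 - softmax_probs <= qhat)[0]

# Toy usage with random "model" outputs standing in for real softmax scores.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)
cal_labels = rng.integers(0, 5, size=200)
qhat = calibrate_quantile(cal_probs, cal_labels, alpha=0.1)
print(prediction_set(rng.dirichlet(np.ones(5)), qhat))
```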
In the process many limitations and vulnerabilities have been highlighted or discovered. These vulnerabilities impede the possible application of LLMs in many technologies, this is especially true with respect to usage in intelligence and safety-critical use-cases where high levels of assurance are required to ensure performance lies within expected bounds. This report highlights some of the categories of vulnerabilities, and where in the life-cycle they are likely to fall. Several mitigation strategies have also been identified. This information should be helpful in understanding the current research landscape, and guiding further research efforts. 13" + }, + { + "url": "http://arxiv.org/abs/2306.15401v5", + "title": "Explainable Multimodal Emotion Reasoning", + "abstract": "Multimodal emotion recognition is an active research topic in artificial\nintelligence. Its main goal is to integrate multi-modalities to identify human\nemotional states. Current works generally assume accurate emotion labels for\nbenchmark datasets and focus on developing more effective architectures.\nHowever, emotions have inherent ambiguity and subjectivity. To obtain more\nreliable labels, existing datasets usually restrict the label space to some\nbasic categories, then hire multiple annotators and use majority voting to\nselect the most likely label. However, this process may cause some correct but\nnon-candidate or non-majority labels to be ignored. To improve reliability\nwithout ignoring subtle emotions, we propose a new task called \"Explainable\nMultimodal Emotion Reasoning (EMER)\". In contrast to traditional tasks that\nfocus on predicting emotions, EMER takes a step further by providing\nexplanations for these predictions. Through this task, we can extract more\nreliable labels since each label has a certain basis. Meanwhile, we use LLMs to\ndisambiguate unimodal descriptions and generate more complete multimodal EMER\ndescriptions. From them, we can extract more subtle labels, providing a\npromising approach for open-vocabulary emotion recognition. This paper presents\nour initial efforts, where we introduce a new dataset, establish baselines, and\ndefine evaluation metrics. In addition, EMER can also be used as a benchmark\ndataset to evaluate the audio-video-text understanding capabilities of\nmultimodal LLMs. To facilitate further research, we will make the code and data\navailable at: https://github.com/zeroQiaoba/AffectGPT.", + "authors": "Zheng Lian, Licai Sun, Haiyang Sun, Hao Gu, Zhuofan Wen, Siyuan Zhang, Shun Chen, Mingyu Xu, Ke Xu, Lan Chen, Jiangyan Yi, Bin Liu, Jianhua Tao", + "published": "2023-06-27", + "updated": "2024-04-24", + "primary_cat": "cs.MM", + "cats": [ + "cs.MM", + "cs.HC" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "Multimodal emotion recognition has experienced rapid development in recent years [1, 2]. Current works predominantly revolve around two aspects: the collection of larger and more realistic datasets [3] and the development of more effective architectures [4]. Despite promising progress, multimodal emotion recognition suffers from label ambiguity. It arises due to the inherent subjectivity of emotions, where different annotators may assign distinct labels to the same video. Label ambiguity results in potentially inaccurate labels of existing datasets, bringing obstacles to the systems developed on these datasets to meet requirements in practical applications. 
To enhance label reliability, current works mainly focus on restricting the label space to reduce the annotation diversity, while increasing the number of annotators and using the majority vote to determine the most likely label [5, 6]. However, this approach may exclude correct but non-dominant or non-candidate labels, thereby limiting the model\u2019s ability to describe subtle emotions. To obtain reliable labels without ignoring subtle ones, we introduce a new task called \u201cExplainable Multimodal Emotional Reasoning (EMER)\u201d. Unlike traditional tasks that focus on predicting emo- Preprint. Under review. arXiv:2306.15401v5 [cs.MM] 24 Apr 2024 tions, EMER goes a step further and provides explanations for these predictions. In this way, the obtained labels are more reliable because there is a corresponding basis. Meanwhile, with the help of the reasoning capability of large language models (LLMs), visual and acoustic clues can disambiguate the subtitle content and obtain multimodal descriptions containing rich emotions. Therefore, EMER also provides a promising way for open-vocabulary emotion recognition. Another motivation behind EMER is that emotions are related to multi-faceted clues, such as facial expressions, prosody, gestures (or micro-gestures), etc. It is inappropriate to identify emotions from just one aspect. EMER provides a general format for all emotion-related tasks, aiming to integrate multiple clues and generate more comprehensive descriptions. After converting all clues into text format, we can further exploit the reasoning ability of LLMs to generate accurate emotion predictions. To facilitate research in this area, we establish an initial dataset, baselines, and evaluation metrics. EMER provides a promising way to achieve reliable and open-vocabulary emotion recognition. In addition, researchers have started with attempts to extend LLMs to multi-modalities [7\u20139]. EMER can also serve as a benchmark dataset for evaluating the audio-text-video understanding capabilities of multimodal LLMs (MLLMs). The main contributions of this paper can be summarized as follows: \u2022 This paper proposes a new task, EMER. Compared to traditional emotion recognition, EMER further provides evidence for these predictions. From another perspective, EMER provides a general format for emotion-related tasks, aiming to integrate multiple clues to achieve more accurate emotion prediction. \u2022 To facilitate research in EMER, we build an initial dataset, develop baselines, and define evaluation metrics. \u2022 EMER is a promising approach to achieving reliable and open-vocabulary emotion recog- nition. At the same time, EMER can also serve as a foundational task to evaluate the audio-video-text understanding ability of MLLMs. The remainder of this paper is organized as follows: In Sections 2\u223c4, we establish the initial dataset, baselines, and evaluation metrics for EMER. In Section 5, we report the performance of different systems and perform an in-depth analysis. Finally, we conclude this paper and discuss future work in Section 6.", + "main_content": "To construct the dataset for EMER, we select samples from a multimodal emotion dataset, MER2023 [6]. During the annotation process, we need to label multi-faceted clues, which requires a lot of manual effort. To reduce costs, we randomly select 332 non-neutral samples for annotation. In the future, we will explore ways to reduce costs and further expand the dataset size. 
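As a small illustration of this selection step, the snippet below filters out neutral clips and draws a 332-sample annotation pool; the list-of-dicts layout and the "label" field name are assumptions about how the MER2023 metadata might be stored.

```python
import random

def select_annotation_pool(samples, k=332, seed=0):
    """Randomly draw k non-neutral clips from the source corpus for manual annotation."""
    non_neutral = [s for s in samples if s.get("label") != "neutral"]
    if len(non_neutral) < k:
        raise ValueError("not enough non-neutral samples to draw from")
    rng = random.Random(seed)
    return rng.sample(non_neutral, k)
```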
This section introduces the data annotation process and further analyzes the multi-faceted capabilities of the annotated results. 2.1 Data Annotation We have some findings during the annotation process: For videos in multimodal emotion recognition, the lexical content is generally short and colloquial, and the emotional expressions are relatively vague. But by combining visual and acoustic clues, we can disambiguate the lexical content and generate more accurate descriptions. Therefore, we mainly annotate visual and acoustic clues. In this paper, we use GPT-4V (\u201cgpt-4-vision-preview\u201d) for image processing and GPT-3.5 (\u201cgpt-3.5turbo-16k-0613\u201d) for text processing. This setup can reduce API call costs, as we observe similar performance between GPT-3.5 and GPT-4V in the text analysis involved in this paper. Figure 1 presents the pipeline of data annotation and Table 1 provides prompts involved in the data annotation. In Appendix A, we further provide an example to visualize the output of each step. Prelabel. First, we attempt to annotate visual and acoustic clues directly. However, the description obtained in this manner is short and cannot cover multi-faceted clues. Therefore, we use GPT-4V to provide initial annotations. Considering that GPT-4V only supports image and text input, we sample the video and use the prompt (see #1 in Table 1) to extract emotion-related visual clues, including facial expressions, gestures, etc. To obtain acoustic clues, we try converting the audio to a melspectrogram, but GPT-4V fails to generate proper responses on the mel-spectrogram. Considering 2 acoustic clue visual clue subtitle MER2023 select Pre-label Multimodal EMER Description Disambiguate Two-round Check lexical clue Merge Figure 1: Pipeline of data annotation. Table 1: Prompts involved in data annotation. ID/Task Prompt #1 Prelabel Visual Clue As an expert in the field of emotions, please focus on facial expressions, body language, environmental cues, and events in the video and predict the emotional state of the character. Please ignore the character\u2019s identity. We uniformly sample 3 frames from this video. Please consider the temporal relationship between these frames and provide a complete description of this video. Avoid using descriptions like \u201cthe first image\u201d, \u201cthe second image\u201d, and instead use terms like \u201cbeginning\u201d, \u201cmiddle\u201d, and \u201cend\u201d to denote the progression of time. #2 Prelabel Acoustic Clue Please assume the role of an expert in the field of emotions. We have a piece of text. Please analyze which parts of it can be used to infer the emotional states of the characters, and provide reasoning for your inference. #3 Disambiguation Please assume the role of an expert in the field of emotions. We provide audio and video cues that may be related to the emotions of the characters. Additionally, we provide the original subtitle of the video. Please analyze which parts of the subtitle can be used to infer the emotional states of the characters and provide reasoning for your inference. In the process of inference, please integrate the audio and video cues for analysis. #4 Open-vocabulary Generation Please assume the role of an expert in the field of emotions. We provide clues that may be related to the emotions of the characters. Based on the provided clues, please identify the emotional states of the main characters. 
Please separate different emotional categories with commas and output only the clearly identifiable emotional categories in list format. If none are identified, please output an empty list. that the lexical content in audio also contains emotional clues, we use the prompt (see #2 in Table 1) to obtain the initial acoustic clues. Two-round Check. Pre-labeled visual and acoustic clues contain some errors. For visual clues, GPT-4V may produce hallucinatory responses, i.e., it may contain some clues that are not present. For acoustic clues, lexical content is generally short and colloquial, and emotional clues based on it may be incorrect. Additionally, there are some duplicate expressions in the pre-labeled clues and some key clues are missing. Therefore, we hire two annotators and conduct two rounds of checks to obtain more reliable visual and acoustic clues. Multimodal EMER Description. In this step, we use the checked acoustic and visual clues to disambiguate the subtitle and obtain lexical clues (see #3 in Table 1). We observe that this operation can produce more accurate lexical clues. Then, we combine all clues to generate a multimodal EMER description, where we should replace \u201cvisual clues\u201d, \u201cacoustic clues\u201d, \u201csubtitle\u201d, and \u201clexical clues\u201d with their corresponding contents: In the video, {visual clues}; In the audio, {acoustic clues}; The subtitle of the video is {subtitle}; From the subtitle, {lexical clues}. Open-vocabulary Generation. We then extract emotion labels from the multimodal EMER descriptions (see #4 in Table 1). In this process, we do not limit the label space. Meanwhile, these labels are generally reliable because they are evidenced in the description. Therefore, EMER provides a path for reliable and open-vocabulary emotion recognition. 3 Table 2: Prompts for generating multi-faceted results from EMER descriptions. ID/Task Prompt #1 Visual Clue Analysis Please assume the role of an expert in the field of emotions. We provide clues related to the emotions of the characters in the video. Please output the facial movements and body gestures involved in the description, separated by commas. The output format should be in list form. #2 Discrete Emotion Rec. Please assume the role of an expert in the emotional domain. We provide clues that may be related to the emotions of the character. Based on the provided clues, identify the emotional states of the main characters. We provide a set of emotional candidates, please rank them in order of likelihood from high to low. Please output the sorted result directly. The candidate set is {Candidate Labels}. #3 Valence Estimation As an expert in the emotional domain, we provide clues that may be related to the emotions of characters. Based on the provided clues, please identify the overall positive or negative emotional polarity of the main characters. The output should be a floating-point number ranging from -5 to +5. Here, -5 indicates extremely negative emotions, 0 indicates neutral emotions, and +5 indicates extremely positive emotions. Larger numbers indicate more positive emotions, while smaller numbers indicate more negative emotions. Please provide your judgment as a floating-point number with two decimal places, directly outputting the numerical result without including the analysis process. #4 Open-vocabulary Emotion Rec. Please assume the role of an expert in the field of emotions. We provide clues that may be related to the emotions of the characters. 
Based on the provided clues, please identify the emotional states of the main characters. Please separate different emotional categories with commas and output only the clearly identifiable emotional categories in list format. If none are identified, please output an empty list. 2.2 EMER Description Analysis EMER contains a variety of emotion-related clues, such as gestures, facial expressions, lexical content, etc. In this section, we analyze EMER descriptions from different perspectives. Table 2 summarizes prompts for generating multi-faceted results. Experimental results are shown in Table 3. Visual Clue Analysis. EMER contains a variety of visual clues. In this part, we perform a statistical analysis of the number of visual clues. We use the prompt #1 in Table 2 to obtain all visual clues from the EMER description. We observe that some videos have no emotion-related visual clues, but on average each video has 4.95 visual clues. These results suggest that EMER descriptions often contain rich visual clues, which help determine the character\u2019s emotional state. Discrete Emotion Recognition. We first test the performance of EMER descriptions in discrete emotion recognition. Considering that MER2023 contains six discrete labels (i.e., worried, happy, neutral, angry, surprised, and sad), we use the #2 prompt in Table 2 and limit the candidate set to these categories. Experimental results show that the top-1/top-2 accuracy can reach 93.48/96.89. With further analysis, we find that these errors are caused by ranking errors in ChatGPT or inaccurate labels in MER2023. Therefore, EMER descriptions contain clues for discrete emotion recognition. Valence Estimation. We also validate the valence estimation results based on EMER descriptions. Specifically, we the #3 prompt in Table 2 to predict the valence score, ranging from -5 to 5, the same range as in MER2023. Then, we calculate the PCC score between the predicted and MER2023 labels. Meanwhile, we report the accuracy and weighted average f1-score (WAF) for positive/negative classification. In Table 3, we observe a relatively high correlation between the predicted and MER2023 labels. These results suggest that EMER descriptions also contain clues for valence estimation. Therefore, our EMER task unifies both discrete and dimensional emotion recognition. Open-vocabulary Emotion Recognition. We can also extract open-vocabulary emotion labels from the EMER description. Specifically, we use the #4 prompt in Table 2 to extract open-vocabulary emotion labels. Then, we calculate statistics on the extracted labels. We observe that there are 232 emotion labels in our initial EMER dataset, with an average of 2.92 labels per sample. Although some labels are synonyms, these results also prove that EMER descriptions contain more detailed emotion labels. 4 Table 3: Performance of EMER descriptions on different tasks. Task Metric Value Visual Clue Analysis Min # of clues per sample 0 Max # of clues per sample 14 Avg # of clues per sample 4.95 Discrete Emotion Rec. Top-1 accuracy 93.48 Top-2 accuracy 96.89 Valence Estimation PCC 0.881 Positive/Negative ACC 94.80 Positive/Negative WAF 94.86 Open-vocabulary Emotion Rec. # of candidate labels 232 Min # of labels per sample 1 Max # of labels per sample 12 Avg # of labels per sample 2.92 Table 4: Prompt for generating EMER-like descriptions from MLLMs. 
Table 4: Prompt for generating EMER-like descriptions from MLLMs.
Audio LLM: Subtitle content of the audio: {subtitle}; As an expert in the field of emotions, please focus on the acoustic information and subtitle content in the audio to discern clues related to the emotions of the individual. Please provide a detailed description and ultimately predict the emotional state of the individual in the audio.
Video LLM: Subtitle content of the video: {subtitle}; As an expert in the field of emotions, please focus on the facial expressions, body movements, environment, subtitle content, etc., in the video to discern clues related to the emotions of the individual. Please provide a detailed description and ultimately predict the emotional state of the individual in the video.
Audio-Video LLM: Subtitle content of the video: {subtitle}; As an expert in the field of emotions, please focus on the facial expressions, body movements, environment, acoustic information, subtitle content, etc., in the video to discern clues related to the emotions of the individual. Please provide a detailed description and ultimately predict the emotional state of the individual in the video.
3 Baselines
A straightforward way to address EMER is to use MLLMs, as these models are capable of handling various multimodal understanding tasks. Since emotion perception relies on temporal information, we mainly choose MLLMs that support video or audio inputs. Generally, the basic idea behind MLLMs is to align pre-trained models of other modalities to textual LLMs. For example, VideoChat [7] uses Q-Former [10] to map visual queries into the textual embedding space. SALMONN [11] uses a window-level Q-Former to align speech and audio encoders with LLMs. PandaGPT [9] uses ImageBind [12] to learn alignment between different modalities. After fine-tuning on instruction datasets, MLLMs exhibit remarkable abilities to understand instructions and multimodal inputs. Table 4 provides the prompts for MLLMs to generate EMER-like descriptions. For a fair comparison, we use similar prompts for audio, video, and audio-video LLMs. Meanwhile, we include subtitles as additional input in the prompts.
4 Evaluation Metrics
The main purpose of EMER is to obtain rich and reliable emotion labels. Therefore, we evaluate the overlap rate between the predicted and annotated label sets. Since we do not fix the label space, MLLMs will generate labels with different expressions but similar meanings (such as happy and joy). Therefore, we first use ChatGPT to cluster all labels based on similarity and then calculate the evaluation results. These metrics are defined as ChatGPT-based metrics. Additionally, we employ commonly used metrics for evaluating natural language generation systems, including BLEU1, BLEU4, METEOR, and ROUGEl.
4.1 ChatGPT-based Metrics
Figure 2: Pipeline for generating unimodal and multimodal EMER descriptions (Multi-GT, Audio-GT, Video-GT, and Text-GT).
When calculating the label set overlap rate, we first group all labels using the following prompt:
Prompt: Please assume the role of an expert in the field of emotions. We provide a set of emotions. Please group the emotions, with each group containing synonyms or consistent emotional terms. Directly output the results, with each group in list format.
Specifically, assume G(\u00b7) is a function that maps each label to its group ID (a small illustrative sketch of this grouping-based matching follows).
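In the sketch below, the synonym groups are hypothetical stand-ins for the output of the ChatGPT grouping prompt above, and the two set-level quantities correspond to the metrics defined formally next.

```python
# Hypothetical synonym groups; in practice they would come from the ChatGPT grouping prompt.
GROUPS = {
    "happy": 0, "joyful": 0, "cheerful": 0,
    "sad": 1, "sorrowful": 1,
    "angry": 2, "irritated": 2,
}

def G(label):
    """Map an emotion label to its synonym-group ID; unseen labels form their own group."""
    return GROUPS.get(label, label)

def set_metrics(gt_labels, pred_labels):
    """Compute the set-level accuracy and recall on group-ID sets."""
    Y = {G(x) for x in gt_labels}        # ground-truth label groups
    Y_hat = {G(x) for x in pred_labels}  # predicted label groups
    accuracy_s = len(Y & Y_hat) / len(Y_hat) if Y_hat else 0.0
    recall_s = len(Y & Y_hat) / len(Y) if Y else 0.0
    return accuracy_s, recall_s

# Toy example: one predicted label matches the ground truth after grouping.
print(set_metrics(["happy", "relaxed"], ["joyful", "calm", "angry"]))
```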
The labels extracted from the EMER description are $\{y_i\}_{i=1}^{M}$ and the labels extracted from the baselines are $\{\hat{y}_i\}_{i=1}^{N}$, where $M$ and $N$ are the numbers of labels in these two sets. During the calculation, we first map each label to its group ID: $\mathcal{Y} = \{G(x) \mid x \in \{y_i\}_{i=1}^{M}\}$ and $\hat{\mathcal{Y}} = \{G(x) \mid x \in \{\hat{y}_i\}_{i=1}^{N}\}$. This process eliminates labels with different expressions but similar meanings. Then, we define the following two metrics:
$\text{Accuracy}_s = \frac{|\mathcal{Y} \cap \hat{\mathcal{Y}}|}{|\hat{\mathcal{Y}}|}$, (1)
$\text{Recall}_s = \frac{|\mathcal{Y} \cap \hat{\mathcal{Y}}|}{|\mathcal{Y}|}$. (2)
Here, Accuracy_s represents how many predicted labels are correct; Recall_s denotes whether the predicted labels cover all ground-truth labels. These two metrics are similar in spirit to traditional precision and recall, but are calculated on label sets.
4.2 Traditional Metrics
We also use traditional metrics for evaluating natural language generation systems, including BLEU1, BLEU4, METEOR, and ROUGEl. These metrics primarily focus on word-level matching. The main reason for adopting them is that ChatGPT-based metrics incur API call costs. We therefore analyze whether there is a strong correlation between traditional metrics and ChatGPT-based metrics. If so, we can use traditional metrics to reduce evaluation costs. More discussion can be found in Section 5.2.
5 Experimental Results and Discussion
In this section, we analyze the performance of different methods on EMER. In addition to MLLMs, we also introduce two heuristic baselines: Empty and Random. For the former, we predict each sample as \u201cUnable to judge the character\u2019s emotional state\u201d. For the latter, we randomly select a label from the candidate set of MER2023 (i.e., worried, happy, neutral, angry, surprised, sad) and generate the description \u201cThrough the video, we can judge the character\u2019s emotional state is {emotion}\u201d. These two baselines indicate lower bounds on performance. Meanwhile, we also report the results of unimodal EMER descriptions, whose generation process is shown in Figure 2. Specifically, our initial dataset provides Multi-GT, which uses lexical and acoustic clues to disambiguate subtitles and obtain lexical clues. Considering that audio contains lexical content, we use the acoustic clue to disambiguate subtitles and obtain Audio-GT. For Text-GT, we can only infer the emotional state from subtitles; therefore, we use prompt #2 in Table 1 to get lexical clues. Furthermore, we directly use the visual clues as Video-GT.
5.1 Main Results
This section reports the performance of different methods on EMER. Experimental results are shown in Table 5. In this table, we use the average of Accuracy_s and Recall_s for the final ranking.
Table 5: Performance of different methods on the EMER task (L/V/A indicate whether lexical, visual, and acoustic inputs are used; Avg is the mean of Accuracy_s and Recall_s).
Model | L | V | A | Avg | Accuracy_s | Recall_s | BLEU1 | BLEU4 | METEOR | ROUGEl
Empty | \u00d7 | \u00d7 | \u00d7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 2.4
Random | \u00d7 | \u00d7 | \u00d7 | 19.0 | 24.1 | 13.8 | 0.0 | 0.0 | 3.2 | 6.0
Valley [13] | \u221a | \u221a | \u00d7 | 11.1 | 12.7 | 9.6 | 8.5 | 1.1 | 9.2 | 11.7
Otter [14] | \u221a | \u221a | \u00d7 | 24.5 | 27.6 | 21.4 | 0.5 | 0.1 | 5.2 | 8.0
PandaGPT [9] | \u221a | \u221a | \u221a | 25.3 | 26.5 | 24.2 | 24.2 | 4.6 | 20.5 | 20.7
Video-LLaMA [8] | \u221a | \u221a | \u00d7 | 28.4 | 27.3 | 29.4 | 33.2 | 5.3 | 22.1 | 20.0
VideoChat2 [15] | \u221a | \u221a | \u00d7 | 32.9 | 36.8 | 28.9 | 6.4 | 1.3 | 12.4 | 14.4
Video-ChatGPT [16] | \u221a | \u221a | \u00d7 | 35.3 | 39.0 | 31.7 | 25.1 | 5.0 | 20.2 | 19.9
VideoChat [7] | \u221a | \u221a | \u00d7 | 35.9 | 34.3 | 37.5 | 34.8 | 6.6 | 27.8 | 19.8
SALMONN [11] | \u221a | \u00d7 | \u221a | 36.0 | 34.3 | 37.8 | 15.5 | 2.7 | 15.7 | 16.0
mPLUG-Owl [17] | \u221a | \u221a | \u00d7 | 39.8 | 38.8 | 40.7 | 24.6 | 4.9 | 20.5 | 19.6
Qwen-Audio [18] | \u221a | \u00d7 | \u221a | 46.0 | 54.0 | 38.0 | 0.7 | 0.2 | 8.2 | 13.6
GPT-4V [19] | \u221a | \u221a | \u00d7 | 56.0 | 53.2 | 58.9 | 45.4 | 29.1 | 53.8 | 40.4
Text-GT | \u221a | \u00d7 | \u00d7 | 44.0 | 48.9 | 39.1 | 25.2 | 10.3 | 23.0 | 20.1
Video-GT | \u00d7 | \u221a | \u00d7 | 60.3 | 63.1 | 57.5 | 58.2 | 42.7 | 51.9 | 49.4
Audio-GT | \u00d7 | \u00d7 | \u221a | 65.4 | 66.1 | 64.6 | 46.7 | 30.8 | 42.2 | 40.2
We observe that most MLLMs outperform the heuristic baselines, suggesting that MLLMs can solve the EMER task to some extent. However, there is still a significant gap between the results of MLLMs and the ground truth, which highlights the limitations of existing MLLMs on this task. Furthermore, we observe that Audio-GT performs better than Video-GT. The same phenomenon can also be found in MERBench [2]. The main reason is that MER2023 relies more on audio to convey emotions.
5.2 Metric Correlation Analysis
In this section, we aim to answer the following question: can ChatGPT-based metrics be replaced with traditional metrics, given that ChatGPT incurs API call costs? To this end, we calculate PCC scores between different metrics. Experimental results are shown in Figure 3.
Figure 3: Correlation between different metrics.
From this figure, we observe that the correlation within the ChatGPT-based metrics (or within the traditional metrics) is generally high. However, the correlation between the two families of metrics is generally low. Among all traditional metrics, ROUGEl shows the highest correlation with ChatGPT-based metrics. However, according to the results in Table 5, a higher ROUGEl score does not guarantee a better ranking. Therefore, there are certain differences between these metrics, and we cannot replace ChatGPT-based metrics with traditional metrics.
6 Conclusion and Future Work
This paper introduces a new task, EMER. Unlike traditional emotion recognition, EMER requires further evidence to support prediction results. By introducing this task, we aim to improve annotation reliability and emotional richness, thereby making systems developed on this task meet the requirements of practical applications. To facilitate research on EMER, we build an initial dataset, develop baselines, and define evaluation metrics. Experimental results demonstrate the difficulty of this task, as existing MLLMs struggle to obtain satisfactory results. Meanwhile, EMER descriptions contain multi-faceted and multi-modal clues (such as facial expressions, gestures, lexical content, etc.) and can identify discrete and dimensional emotions simultaneously. Therefore, EMER can serve as a universal format for all emotion-related tasks. In the future, we plan to reduce the annotation cost and expand the dataset size. Meanwhile, we will enlarge the evaluation scope by including more MLLMs.
Furthermore, we will design more effective frameworks to solve this task. 7 Acknowledge This work is supported by the National Natural Science Foundation of China (NSFC) (No.62201572, No.62276259, No.U21B2010, No.62271083, No.62306316, No.62322120). 8" + }, + { + "url": "http://arxiv.org/abs/2307.08581v1", + "title": "BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs", + "abstract": "LLMs have demonstrated remarkable abilities at interacting with humans\nthrough language, especially with the usage of instruction-following data.\nRecent advancements in LLMs, such as MiniGPT-4, LLaVA, and X-LLM, further\nenlarge their abilities by incorporating multi-modal inputs, including image,\nvideo, and speech. Despite their effectiveness at generating precise and\ndetailed language understanding of the given modality signal, these LLMs give\nup the ability to ground specific parts of inputs, thus only constructing a\ncoarse-grained mapping. However, explicit and informative correspondence\nbetween text and other modalities will not only improve the user experience but\nalso help to expand the application scenario of multi-modal LLMs. Therefore, we\npropose BuboGPT, a multi-modal LLM with visual grounding that can perform\ncross-modal interaction between vision, audio and language, providing\nfine-grained understanding of visual objects and other given modalities. As a\nresult, BuboGPT is able to point out the specific location of an object in the\nimage, when it is generating response or description for that object. Our\ncontributions are two-fold: 1) An off-the-shelf visual grounding module based\non SAM that extracts entities in a sentence and find corresponding masks in the\nimage. 2) A two-stage training scheme and instruction dataset to endow joint\ntext-image-audio understanding. Our experiments show that BuboGPT achieves\nimpressive multi-modality understanding and visual grounding abilities during\nthe interaction with human. It performs consistently well when provided by\narbitrary modality combinations (either aligned or unaligned). Our code, model\nand dataset are available at https://bubo-gpt.github.io .", + "authors": "Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, Bingyi Kang", + "published": "2023-07-17", + "updated": "2023-07-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "The large language models (LLMs) have made significant progress and demonstrated promising abilities in few-shot and zero-shot learning by leveraging instruct tuning [1] on carefully curated datasets. To harness the potential of LLMs beyond just language, some recent studies [2, 3, 4, 5, 6, 7, 8, 9, 10] successfully connect LLMs with more input signals (e.g., image, video, speech and audio), and build powerful multi-modal chatbots. However, these models often perform understanding without digging into the fine-grained relation between the visual objects and other given modalities. For example, when an illustrative figure is given, a visually-enhanced LLM will generate a high- quality description with rich details, but in a black-box manner. Instead, an instructive teacher-bot is going to show its audience which part of the figure it is referring to and what is happening there. Such visual grounding abilities are intriguing to LLMs but previously under-explored in the literature. 
In this paper, we propose BuboGPT, the first attempt to incorporate visual grounding into LLMs by relating visual objects with other modalities. Moreover, it is able to perform joint multi-modal understanding and chatting for text, vision and audio, which is achieved by learning a shared representation space that aligns well with pre-trained LLMs.
Figure 1: The overall framework of BuboGPT.
To this end, we first build an off-the-shelf visual grounding pipeline based on SAM [11] to explore the fine-grained relation between different visual objects and modalities. The pipeline is composed of three modules, namely, a tagging module, a grounding module and an entity-matching module. The tagging module is a pre-trained model [12] that can generate multiple text tags/labels that are relevant to the input image. The SAM-based [11] grounding module [13] further localizes the semantic mask or box on the image for each tag/label. Then, the entity-matching module leverages the reasoning capabilities of LLMs to retrieve matched entities from tags and image descriptions. In this way, we connect visual objects and other modalities by using language as a bridge. Then, to unlock the multi-modal understanding ability for arbitrarily combined inputs, we employ a two-stage training scheme similar to MiniGPT-4 [2]. More specifically, we use ImageBind [14] as the audio encoder, BLIP-2 [15] as the vision encoder and Vicuna [16] as the LLM. In the first stage, we learn a Q-Former to align vision or audio features with language on image or audio caption datasets respectively. In the second stage, we perform multi-modal instruct tuning on a high-quality instruction-following dataset. We observe that the construction of this dataset is crucial for the LLM to recognize whether a modality is provided and whether the input modalities are well matched with each other. Therefore, we devise a novel high-quality dataset, which is composed of four subsets: 1) a vision instruction dataset; 2) an audio instruction dataset; 3) a sound localization dataset with positively paired image-audio examples; 4) an image-audio captioning dataset with negative pairs. Note that by introducing the negative image-audio pairs for semantic reasoning, BuboGPT can learn better multi-modal alignment and demonstrate stronger capabilities of joint understanding. Our experiments show that BuboGPT achieves impressive visual grounding abilities during multi-modal chat, even when arbitrary combinations of multi-modal inputs are provided, whether matched or unmatched. We summarize our key contributions as follows: \u2022 We build a multi-modal LLM, BuboGPT, for multi-modal understanding including image, audio and text by learning a common semantic space, and further explore the fine-grained relation between different visual objects and different modalities. \u2022 We construct a high-quality multi-modal instruction-tuning dataset including fine-grained audio descriptions and cross-modal sound localization, and introduce both positive and negative image-audio pairs for semantic matching to facilitate cross-modal understanding.", "main_content": "Pre-trained LLMs in Multi-modal Learning.
Due to the scaling up of training data and model size, large language models [17, 18, 19, 16] have demonstrated remarkable abilities across various linguistic tasks in a few-shot and zero-shot manner and have also enabled conversational communication with humans. To leverage the powerful linguistic abilities of LLMs, some methods [20, 21] propose to connect different foundation models for multi-modal tasks by using LLMs as a dispatch scheduler. Based on high-quality multi-modal instruction-following data, recent end-to-end methods [2, 3, 4, 5, 6, 7, 8, 9, 10] have been introduced to extend LLMs for multi-modal learning as well. Some works such as Mini-GPT4 [2], X-LLM [3] and Video-ChatGPT [10] propose to align the input features of different modalities with pre-trained LLMs via learned visual encoders. Other works such as LLaMA-Adapter [5] and Otter [7] insert learnable cross-attention layers into the pre-trained LLMs to incorporate multi-modal knowledge. These prior methods mainly focus on tackling visual inputs (e.g. videos and images) [2, 5, 6, 4, 9, 7] or ignore the fine-grained relation between the visual objects and the other given modalities [8, 3]. We further attempt to incorporate visual grounding into LLMs by relating visual objects with other modalities and propose to learn multi-modal alignment including image, audio and text in a common space.
Multi-modal Instruction Tuning Dataset. To explore instruction tuning for multi-modal learning, [22] first introduces a multi-modal instruction tuning benchmark that is composed of 62 diverse multi-modal tasks in a unified seq-to-seq format. Mini-GPT4 [2] curates an instruction-following dataset by combining Conceptual Captions [23, 24], SBU [25] and LAION [26] with hand-designed prompts, while LLaVA [6] proposes to use GPT-4 [17] to generate more detailed captions to expand the COCO dataset [27]. Otter [7] further builds a multi-modal in-context tuning dataset to facilitate the in-context learning capabilities of multi-modal LLMs. Further, we build a high-quality instruction tuning dataset including fine-grained audio descriptions and introduce negative image-audio pairs for semantic reasoning to enhance the reasoning capabilities of our model.
3 Methods
The overall framework of BuboGPT is presented in Figure 1. As shown in the figure, we perform joint multi-modal understanding and chatting for text, vision and audio, which is achieved by learning a shared representation space that aligns well with the pre-trained Vicuna [16]. We also build an off-the-shelf visual grounding pipeline to explore the fine-grained relation between different visual objects and modalities.
3.1 Visual Grounding Pipeline
To explore the relation between different visual objects and input modalities, we further build the visual grounding pipeline, composed of a tagging module, a grounding module and an entity-matching module, as shown in Figure 2. Concretely, for a given image, we first use the Recognize Anything Model (RAM) [12], a strong Swin-Transformer-based [28] image tagging model, to generate relevant candidate tags, denoted as {t_1, t_2, ..., t_{n_t}}, where t_i is the i-th semantic tag and n_t is the number of detected tags. We then connect the tags with commas to form the prompt \u201ct_1, t_2, ..., t_{n_t}\u201d and use Grounding DINO [13], an open-set object detection model with referring textual queries, to identify the visual entities and the corresponding boxes relevant to the tags (a small illustrative sketch of this step follows).
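The sketch below is only a rough stand-in for this tagging-and-grounding step: run_ram_tagger and run_grounding_dino are hypothetical wrapper functions in place of the actual RAM and Grounding DINO inference code (whose real APIs may differ); only the comma-joined prompt construction mirrors the description above.

```python
def run_ram_tagger(image):
    # Hypothetical placeholder for RAM inference: returns candidate tags t_1..t_nt.
    return ["dog", "grass", "park"]

def run_grounding_dino(image, text_prompt):
    # Hypothetical placeholder for Grounding DINO inference:
    # returns (phrase, bounding box) pairs grounded in the image.
    return [("dog", (40, 60, 220, 300)), ("grass", (0, 250, 640, 480))]

def tag_and_ground(image):
    """Tag the image, join the tags into a referring prompt, and ground them to boxes."""
    tags = run_ram_tagger(image)
    prompt = ", ".join(tags)  # "t_1, t_2, ..., t_nt" prompt, as described above
    entity_boxes = run_grounding_dino(image, prompt)
    return tags, entity_boxes

print(tag_and_ground(image=None))
```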
The boxes are then taken as prompts by the Segment Anything Model (SAM) [11] to obtain fine-grained semantic masks. With the tagging and grounding modules, we then obtain all the visual entities and the corresponding grounding information, denoted as {(e_1, g_1), (e_2, g_2), ..., (e_{n_e}, g_{n_e})}, where e_i and g_i are the i-th visual entity and its grounding information (i.e. boxes and masks), and n_e is the number of entities. To model the relation between different visual entities and input modalities, we employ the text output t_o of our multi-modal LLM as the bridge and build an entity-matching module based on GPT-4 to retrieve the matching pairs. We construct the prompt template \u201ce_1, e_2, ..., e_{n_e}, t_o\u201d and utilize the powerful LLM for reasoning to retrieve the matching pairs, which reflect the relation between visual entities and input modalities.
Figure 2: The pipeline of visual grounding that is composed of a tagging module, a grounding module and an entity-matching module.
3.2 Multi-Modal LLM Training
BuboGPT considers the interaction between three modalities, i.e., text, vision and audio. It aligns a vision encoder and an audio encoder with the LLM using a Q-Former for each modality. More specifically, we utilize the visual encoder together with the pre-trained Q-Former in BLIP-2 [15] and the audio encoder in ImageBind [14] for visual and audio perception. For joint understanding over multiple modalities, we employ Vicuna as the LLM. We use a linear projection layer to connect each modality Q-Former with the LLM. To effectively train such a model, we develop the following two-stage training scheme. The modality encoders and the Vicuna model will be kept fixed throughout the training procedure.
Stage 1: Single-modal Pre-training. Similar to MiniGPT-4 [2], the first stage is designed to align the output of the linear projection layer to the word embedding space of the LLM. This is achieved by training the modality Q-Former and linear projection layer on a large number of modality-text paired data. For visual perception, we only train the projection layer for image captioning with the Q-Former from BLIP-2 fixed. For audio understanding, we jointly train the Q-Former and the projection layer for audio captioning. No prompt is used in either setting; the model simply takes the corresponding image or audio as input and predicts the corresponding caption.
Table 1: Instruction-following prompt examples for various input sources.
###Human: What is the image? ###Assistant:
###Human: Pay attention to the audio and describe what you notice. ###Assistant:
###Human: Please find the source that emits the given sound in this image. ###Assistant:
###Human: Are the audio and image related to each other? What are they? ###Assistant:
Stage 2: Multi-Modal Instruct Tuning. This stage aims to equip the multi-modal LLM with the ability to understand human instructions such that it can generate proper responses based on the given modality signals. To this end, we curate a high-quality multi-modal instruction-following dataset, which contains image-text, audio-text and image-audio-text pairs. To make the model adapt to arbitrary combinations of input modalities, we design a general prompt as: ###Human: ###Assistant:.
 and are special identifiers for image and audio input. is going to be replaced by a sequence of image or audio tokens before being fed into the LLM. is the human instruction related to the input signals for the LLM to assist on. We list a few examples for different combinations of input modalities in Tab. 1. We empirically found that when only positively paired image-audio data are included in this stage, the model always assumes the image and audio are related to each other, even though random samples are used at test time. Therefore, we manually create some negative pairs and ask the LLM to describe what they are respectively. The experiments show that introducing such negatively paired data overcomes this problem significantly. We describe the creation of these datasets in the next section.
4 Datasets
4.1 Pretraining Datasets
Following MiniGPT-4 [2], we use a combined dataset of CC3M [23], CC12M [24], SBU [29] and LAION [26] to train the visual projection layer, resulting in a total of 130 million image-text pairs. For audio, we mainly use the WaveCaps [30] dataset, which contains 403,050 audio clips with an average duration of 67.59 seconds and an average caption length of 7.8 words. It combines four datasets, including FreeSound (262,300) [31], BBC Sound Effects (31,201), SoundBible (1,231) and the AudioSet strongly-labelled subset (108,317), and transforms their raw descriptions into captions with ChatGPT.
4.2 Instruction-Tuning Datasets
4.2.1 Image-Text Dataset
We employ two previously published datasets for visual instruct tuning. The first one is released by MiniGPT-4 and contains 3,439 high-quality text-image pairs. The second one, provided by LLaVA [6], is curated from 158K samples based on the COCO dataset, including three types of instructions, i.e., conversations (58K), detailed description (23K) and complex reasoning (77K).
4.2.2 Audio-Text Dataset
When it comes to the field of audio understanding, we also need to conduct the instruction-tuning operation on the audio Q-Former. However, unlike vision-language understanding, this field still severely lacks high-quality and well-organized instruction-tuning datasets. To this end, we generate a series of expressive and descriptive data to facilitate this process. Specifically, we first investigate different kinds of existing audio caption datasets and select Clotho [32] as the original dataset for description extension. The reason is two-fold. On the one hand, it has a moderate and acceptable scale to act as an instruction-tuning dataset, and the semantic range of the audio is large enough. On the other hand, every audio clip has five short captions from different annotators, covering various possible scenes related to the audio and increasing the diversity of descriptions. After obtaining the original data, we need to rewrite the short captions into descriptive and imaginative paragraphs. Considering the extraordinary ability of GPT-4 in few-shot learning, text generation, and complex reasoning, we utilize it to automatically assemble the short captions into long descriptions and thus mitigate the reliance on human annotation. The final description is expected to cover all the related original captions.
For example, given the series of captions [\u201cA person is turning a map over and over.\u201d, \u201cA person is very carefully wrapping a gift for someone else.\u201d, \u201cA person is very carefully wrapping a gift for someone else.\u201d, \u201cHe sighed as he turned the pages of the book, stopping to scan the information.\u201d, \u201cPapers are being turned, stopped, then turned again, and someone is breathing.\u201d], the description paragraph is expected to be \u201cA person is repeatedly flipping some papers. They might be reading a book, flipping through a map, or wrapping presents. Judging from the repeated flipping sounds, they are concentrating on repeating this action.\u201d. We design a task-related prompt and construct some few-shot examples like this to promote the in-context reasoning process. As a result, we collect a novel dataset, Clotho-Detail (https://huggingface.co/datasets/magicr/BuboGPT/blob/main/Clotho-detail-annotation.json), for instruction-tuning in audio understanding, which contains 3,938 items with an average description length of 52.70 words.
4.2.3 Audio-Image-Text Dataset
Positive Set. In order to further empower our model with a comprehensive ability for multi-modal reasoning, we apply a group of audio-image pairs to help the model understand the correspondence between the audio and its source. Among the existing audio-vision datasets, VGGSS [33] turns out to be the better choice for this process. It covers a wide range of sounding objects, and the audio only relates to a specific region in the corresponding image. Therefore, we retrieve all the data cases and use a group of fixed templates to wrap the corresponding class labels into natural sentence descriptions. As a result, we generate a total of 5,158 pairs to act as the triple-modality instruction tuning dataset (https://huggingface.co/datasets/magicr/BuboGPT/blob/main/vggss-instruction-tuning.json).
Footnote URLs: BBC Sound Effects: https://sound-effects.bbcrewind.co.uk/; SoundBible: https://soundbible.com/; AudioSet strongly-labelled subset: https://research.google.com/audioset/download_strong.html
Negative Set. As discussed in the method section (Sec. 3.2), relying solely on the above dataset causes the LLM to fail to recognize irrelevant audio-image pairs. Therefore, we construct negative pairs such that the target response gives independent descriptions for the audio and image inputs. The audio is randomly sampled from the audio-text dataset presented in Sec. 4.2.2, while the image is randomly sampled from the MiniGPT-4 dataset discussed in Sec. 4.2.1. The text is constructed by concatenating the two captions that start with \u201cThe image\u201d and \u201cThe audio\u201d.
5 Experiment Results
In this section, we aim to answer the following two questions: 1) whether our BuboGPT is able to provide accurate and instructive visual grounding when the inputs contain images; and 2) whether the model is able to perceive arbitrary combinations of modalities and generate proper responses. We first consider using a single image as input for fine-grained visual understanding with grounding. As shown in Fig. 3-7, the model can accurately associate textual words or phrases with image regions in various scenarios of different complexities. Then, when a single audio clip is provided for audio understanding, BuboGPT gives informative descriptions covering nearly all the acoustic parts included, even when some audio fragments are too short for humans to notice; see Fig. 8-13 for details.
Next, we show that the model can perform sound localization with a matched audio-image pair provided, which gives a perfect example of aligned audio-image understanding. As illustrated in Fig. 14-17, the model generates an overall description for both the input image and audio, and then points out which object in the image emits the sound after reasoning. It is worth noting that our model can give correct predictions when we provide different audio and keep the image unchanged. This demonstrates that our model can understand both modalities comprehensively rather than generate answers with a prior bias from a single modality. Moreover, we empirically found that if the model is only tuned with well-aligned image-audio data, it actually fails to discriminate when an irrelevant image and audio pair is provided, resulting in a non-factual response that is not consistent with the given image or audio (Fig. 20). After introducing the negatively paired dataset as discussed in Sec. 4.2.3, the model can tell whether the image and audio are relevant to each other and generate high-quality responses for arbitrary audio-image understanding, as evidenced by Fig. 18-19.
6 Conclusion
In this report, we propose a multi-modal LLM, BuboGPT, which is capable of joint multi-modal understanding including image, audio and text, and performs more fine-grained understanding of multi-modal inputs by exploring the relation between different visual objects and modalities. We also build a high-quality instruction tuning dataset, and the experiments show that BuboGPT achieves impressive visual grounding abilities during multi-modal chat, even when arbitrary combinations of multi-modal inputs are provided, whether matched or unmatched.
7 Limitations
Language hallucination. Following prior works, our method is based on the pre-trained Vicuna model, which inherits the limitations of LLMs, including generating non-existent knowledge or non-factual information. The problem might be resolved by training with more high-quality data and developing trustworthy LLMs.
Inadequate capacities of grounding QA. Since the connection between grounding results and different modalities is built upon text conversations without extra training, the capacity for QA on specific objects remains limited. The model can be improved by introducing fine-grained visual grounding datasets and considering the spatial location as extra input.
Figure 3: Example 1 of fine-grained visual understanding with grounding.
The swans appear to be engaging in a playful or affectionate gesture as they move around each other gracefully, displaying their elegant black feathers and white beaks. It captures a moment of harmony and beauty in the natural world, where these creatures appear to be enjoying each other's company. what is the meaning of the behaviour? The behaviour of the black swans in the image is an act of affection, usually between a mated pair or between friends . It is a visual display of bonding and mutual affection. Swans are social birds and engage in various social behaviours to strengthen their relationships. In this particular case, the two black swans are standing next to each other and creating a heart shape by joining their beaks together. This behaviour might have been observed during breeding season or as a way of communication and recognition among mated pairs. Regardless of the specific context, the heart shape represents the bond between the two swans, and their act of creation reflects the beauty and harmony that nature can bring to our lives. Feel free to chat with Bubo-GPT ! Figure 4: Example 2 of fine-grained visual understanding with grounding. viii Describe the image. \u2460small kitten \u2461tote bag \u2462wooden table \u2463strap The image shows a \u2460small kitten in a white and brown color pattern laying in the bottom of a tan and brown leather purse or small \u2461tote bag on a \u2462wooden table . The kitten appears to be curled up and comfortably nestled in the bag. The bag itself has a \u2463strap on the side that allows the bag to be held and carried, likely making it an attractive and convenient option for the cat to nap and rest. Can you tell me why the cat laying down inside the bag? The kitten is laying down inside the purse or bag because it may feel comfortable or safe inside the bag. Cats often curl up and sleep in small spaces, like bags or boxes , as it provides a sense of security and comfort. The purse, with its zippered opening, may allow the cat to move around and get comfortable in the bag without being too confined or frightened by its size. Additionally, the soft material of the purse may mimic the texture of a cat's bedding , making it a desirable sleeping spot. Feel free to chat with Bubo-GPT ! Any other possible reason? Another possible reason why the kitten is laying down inside the bag is that it might have found a cozy and sheltered place to nap. The bag can act as an enclosed space that provides a sense of privacy for the cat, allowing it to rest without fear of being disturbed or startled. The soft interior of the bag might also be inviting for the kitten to rest and snuggle into. It is important to remember that cats love to curl up and hide , and a bag can offer them a comfortable space to do so. Overall, there are various reasons why a kitten might choose to sleep inside a purse or bag, and the comfort and privacy it provides can be a significant factor. Figure 5: Example 3 of of fine-grained visual understanding with grounding. ix what is the image? \u2460mouth \u2461chimpanzee \u2462grass \u2463woman The image shows a \u2463woman with a black fur and white hair sitting on the \u2462grass , holding the back of a \u2461chimpanzee. The woman is wearing a white shirt and is smiling. The chimpanzee is reaching its \u2460mouth towards the woman's mouth and the woman is making a kissing sound with her mouth open. What does the image suggest? The image suggests that humans and animals can have positive relationships and enjoy each other's company. 
It shows a woman playing with a chimpanzee , suggesting that they are engaging in a form of play or bonding activity. The image may suggest that it is possible for humans and animals to communicate and interact with each other in a friendly manner. Feel free to chat with Bubo-GPT ! Figure 6: Example 4 of of fine-grained visual understanding with grounding. x what is the image? \u2460frog \u2461banjo \u2462moss-covered tree stump The image is a small \u2460frog holding a \u2461banjo , with its legs spread out on either side of the banjo. The frog is sitting on top of a \u2462moss-covered tree stump. Anything interesting? Yes, there are several interesting features in the image. The first is the frog's intricate posing on the instrument. The frog appears to be playing the instrument with its legs , which adds an element of creativity and playfulness to the image. The second interesting aspect is the forest setting, which adds a sense of nature and serenity to the image. Finally, the mossy ground adds a touch of whimsy and makes the overall scene appear more rustic and charming. Feel free to chat with Bubo-GPT ! Figure 7: Example 5 of of fine-grained visual understanding with grounding. xi Please describe this audio. In this audio, a bell tower chimes in the background, creating a serene atmosphere, blending with the sounds of a breeze gently blowing through trees and birds singing in the distance . The combination of these elements provides a calming ambiance with a touch of natural harmony. Feel free to chat with Bubo-GPT ! Audio 1: chime, bird, breeze Figure 8: Example 1 of audio understanding. Audio 1 can be accessed via this link. Please describe this audio. The audio features a clear, melodic tune that resonates within the scene. The performance sounds as if it is happening in a spacious area, with the musician playing their instrument and creating a soothing atmosphere. The melody is made up of a series of chords, with different notes being played in unison. This audio captures the essence of a person creating music in an unspecified setting. Feel free to chat with Bubo-GPT ! Audio 2: music, box, melody Figure 9: Example 2 of audio understanding. Audio 2 can be accessed via this link. xii Please describe this audio. The audio depicts a lively atmosphere in a restaurant or gathering, where a group of people engage in conversations. Amidst the chatter, one individual intermittently joins the conversation. The multitude of voices creates a bustling, dynamic atmosphere, capturing the essence of social interaction and camaraderie. Feel free to chat with Bubo-GPT ! Audio 3: people talking Figure 10: Example 3 of audio understanding. Audio 3 can be accessed via this link. Please describe this audio. The audio captures the sound of someone typing rapidly and energetically on a keyboard, creating a rhythmic pattern of clicks and keyboard sounds. This intense activity could be associated with working on a computer or writing a long-form document. Feel free to chat with Bubo-GPT ! Audio 4: keyboard typing Could this audio come from a swimming pool? No, the audio does not suggest the sound of swimming or water. The keyboard clicking and typing sounds dominate the scene, as they would in an office or work environment. Figure 11: Example 4 of audio understanding. Audio 4 can be accessed via this link. xiii Please describe this audio. In the audio, rain is the primary focus, with heavy rain falling down and hitting the ground with force. 
This creates a vivid and immersive atmosphere of a stormy day, with rain pouring down and creating a dynamic soundscape. Additionally, there may be some distant sounds of traffic or vehicles passing by, adding further context to the ambiance. Feel free to chat with Bubo-GPT ! Audio 5: heavy rain If I\u2019m in the scene corresponding to the audio, what should I do first? Based on the audio, it appears to be a heavy rainstorm. If you\u2019re in the scene, you might first take cover from the rain, potentially by seeking shelter under a roof or an overhang, or by running for cover. Alternatively, you might simply enjoy the sound of the rain and take in the experience. Figure 12: Example 5 of audio understanding. Audio 5 can be accessed via this link. xiv Please describe this audio. The audio features a cacophony of loud crashes and the sound of glass shattering, suggesting a situation involving multiple instances of breaking glass. The sound is consistent with what could be produced by breaking glass objects, such as breaking wine glasses, a car back window, or breaking glass bottles. It could also represent the impact of breaking a pane of glass when a large object strikes it, producing a loud, jarring noise. Feel free to chat with Bubo-GPT ! Audio 6: breaking glasses Does this sound portend any potential danger? The audio doesn\u2019t explicitly convey any immediate danger, but the sound of breaking glass can be unsettling and potentially alarming. The shattering could be the result of an accident or intentional act, and the level of danger would depend on the situation. The sound of breaking glass can also be associated with certain types of events like explosions or breaking windows during a storm. In any case, it\u2019s important to pay attention and take appropriate precautions. Figure 13: Example 6 of audio understanding. Audio 6 can be accessed via this link. xv Try to find the sounding source in the image according to the given audio and generate a detailed description about it. Based on the provided image of a \u2460boy riding a \u2461bicycle , it appears that the source of the audio is the sound of a \u2462dog barking. The boy appears to be on a \u2463dirt road , likely traveling through a wooded area, as the tree branches are visible in the background. The dog, which is running alongside the boy's bicycle , is barking in an excited manner, suggesting that it may be chasing after the boy or simply running alongside him. This dynamic duo's adventurous journey through nature creates a vivid and lively scene in the image. Feel free to chat with Bubo-GPT ! Audio 7: dog barking \u2460boy \u2461bicycle \u2462dog \u2463dirt road Figure 14: Example 1 of aligned audio-image understanding. Audio 7 can be accessed via this link. xvi Try to find the sounding source in the image according to the given audio and generate a detailed description about it. The sound heard in the image appears to be a \u2460bicycle bell, which is the device located near the right rear wheel of the bike , and is associated with the action of riding the bicycle, particularly when the rider accelerates or slows down. The \u2461person on the bicycle is wearing a blue shirt and is either wearing \u2462a hat or has one on their head. Feel free to chat with Bubo-GPT ! Audio 8: bicycle bell \u2460bicycle \u2461person on the bicycle \u2462a hat Figure 15: Example 2 of aligned audio-image understanding. Audio 8 can be accessed via this link. 
xvii Try to find the sounding source in the image according to the given audio and generate a detailed description about it. The sound in the image can be attributed to a \u2460hair dryer . The \u2461woman in the picture is holding a hair dryer in her \u2462 hand and using it to dry her \u2463cat 's fur . The blowing air from the hair dryer can be heard, providing an auditory representation of the activity taking place in the image. Feel free to chat with Bubo-GPT ! Audio 9: hair dryer \u2460hair dryer \u2461woman \u2462hand \u2463cat Figure 16: Example 3 of aligned audio-image understanding. Audio 9 can be accessed via this link. xviii Try to find the sounding source in the image according to the given audio and generate a detailed description about it. The audio in the image is produced by a \u2460cat hissing at the \u2461person , who is either holding or near the cat. This can be identified through the sound of the cat's loud , agitated hissing as well as the context of the person interacting with the cat. Feel free to chat with Bubo-GPT ! Audio 10: cat meowing \u2460cat \u2461person Figure 17: Example 4 of aligned audio-image understanding. Audio 10 can be accessed via this link. xix Please tell me a story based on the given audio and image. In this audio, a clock strikes hour after hour, creating a haunting and mournful sound. The sound echoes through the forest, wafting through the trees and into the distance. The image depicts a \u2460mink perched on a mossy \u2461tree trunk. The mink stares intently into the distance, possibly listening to the toll of the clock, representing the passing of time. This atmosphere might suggest a sense of solitude, introspection, or reminiscence. Feel free to chat with Bubo-GPT ! Audio 11: continual bell chime \u2460mink \u2461tree trunk Figure 18: Example 1 of arbitrary audio-image understanding. Audio 11 can be accessed via this link. xx Please tell me a story based on the given audio and image. In this audio, there is a group of kids playing on the \u2460beach during \u2461sunset , enjoying the beautiful view and the gentle breeze. They can be seen running, playing, and laughing together, possibly creating cheerful and light-hearted memories that could last a lifetime. The serene environment, characterized by soothing sounds of seagulls and the \u2462gentle waves , provides a soothing backdrop to the lively and innocent activities. Feel free to chat with Bubo-GPT ! Audio 12: children playing \u2460beach \u2461sunset \u2462gentle waves Figure 19: Example 2 of arbitrary audio-image understanding. Audio 12 can be accessed via this link. xxi Please tell me a story based on the given audio and image. In the image, we see a small brown squirrel perched on a branch of a tree in the forest . The sound of wind rustling through the leaves provides the background ambiance. Suddenly, a bird starts chirping and singing a melody. The chirping gradually becomes louder and more persistent, creating a harmonious and peaceful atmosphere. As the bird sings, the squirrel starts moving around on the branch. Its ears perk up, and its tail is alert. Then, the squirrel suddenly stops and listens intently, its eyes focused on the bird's melody. The bird's song reaches its peak and then gradually fades away, leaving the forest silent. The squirrel remains on the branch, looking out into the woods, as if contemplating the beauty of nature and the solitude of its surroundings. 
The contrast between the peaceful bird song and the serene silence of the forest creates a captivating scene. This combination of the image and audio represents a moment of stillness and awe as the squirrel and bird share the beauty of their respective experiences in the forest. Feel free to chat with Bubo-GPT ! Audio 11: continual bell chime Figure 20: Failure case of arbitrary audio-image understanding without using negative audio-image pairs. xxii" + } + ] +} \ No newline at end of file